Definitely agree. But 11ty’s core dependencies are relatively small (compared to other NodeJS things), and I am personally contributing PRs to get rid of some long ljharb-y dependency chains.
I never get this desire for microservices. Your IDE can help if there are 500 functions, but nothing would help you if you have 500 microservices. Almost no one fully understands such a system. It is hard to tell which parts of the code are unused. And large-scale refactoring is impossible.
The upside seems to be some mythical infinite scalability which will collapse under such positive feedback loops.
The point of microservices is not technical; it's so that the deployment and repository-ownership structure matches your organization structure, and so that clear lines are drawn between responsibilities.
It’s also easier to find devs who have the skills to create and maintain thin services than a large, complicated monolith, despite the difficulties of having to debug a constellation of microservices during a crisis.
For the folks who downvoted this - why? I hire developers and this is the absolute truth of the matter.
You can get away with hiring devs able to only debug their little micro empire so long as you can retain some super senior rockstar level folks able to see the big picture when it inevitably breaks down in production under load. These skills are becoming rarer by the day, when they used to be nearly table stakes for a “senior” dev.
Microservices have their place, but many times you can see that it’s simply developers saying “not my problem” to the actual hard business case things.
You need those senior folks who can see the big picture, whether you use monoliths or microservices.
The real benefit of a microservice is that it's easier to see the interactions, because you can't call into some random and unexpected part of the codebase...or at least it's much harder to do something that's not noticeable like that.
If there are network problems everything fails anyway, so it's not really an issue in production.
In the end, it depends on your skillsets. Most developers can't deal with a lot of complexity, and a monolith is the simplest way to program. They also can't really deal with scale, and cost of learning how to build a real distributed system is high...and the chances you'll hit scale are low.
So instead people scale horizontally or vertically, with ridiculously complicated tools like k8s. K8s basically exists outside of Google because developers can't write scalable apps, whether monolithic or microservice-based.
My interpretation of Conway's Law is that social problems in development organizations are isomorphic to (gross) technical problems, and that leverage works in both directions.
Btw, important factor: you can only see the big picture properly if you co-created the setup. Hiring senior rockstars as a reaction to problems will satisfy some short-term goals but not solve the problems overall.
It's easier to find people who are confident that they understand a microservice, but the fact is that it interacts with the system as a whole and much of that interaction is dark matter. It's unknown unknowns that lead to Dunning-Kruger. People looking at a large system have more known unknowns and are less likely to be overconfident to the same degree.
Also, we need about 5x as many people graduating with formal classes in distributed computing as we have now or have had for the last several decades; it's just ridiculous how many people have to learn this stuff on their own. Distributed debugging is really hard when you don't understand the fundamental problems.
I prefer mid-scale services. For a given app, there shouldn’t be more than 20-30 of them (and preferably around 10). Each will still have clean ownership and single responsibility, but the chaotic quadratic network effect will hopefully not get out of control. Cleanly defined protocols become a necessity though.
I think the dream is that you can reason locally. I'm not convinced that it actually helps any, but the dream is that by having everything as services, complete with external boundaries and enforced constraints, you're able to more accurately reason about the orchestration of services. It's hard to reason about your order flow if half of it depends on some implicit procedure that's part of your shopping cart.
The business I'm part of isn't really after "scalable" technology, so that might color my opinion, but a lot of the arguments for microservices I hear from my colleagues are actually benefits of modular programs. Those two have just become synonyms in their minds.
> […] the dream is that having everything as services, […], you're able to more accurately reason about the orchestration of services.
Well... I mean, that’s an entirely circular point. Maybe you mean something else? That you can individually deploy and roll back different functionality that belongs to a team? There’s some appeal for operations, yeah.
> but a lot of the arguments for microservices I hear from my colleagues are actually benefits of modular programs
Yes, I mean from a development perspective a library call is far, far superior to an HTTP call. It is much more performant and orders of magnitude easier to reason about, since the caller and callee are running the same version of the code. That means a breaking change is a refactor and a single commit, whereas with a service boundary you need a whole migration.
You can’t avoid services altogether; some are external, like a payment portal run by a completely different company. But to deliberately create more of these expensive boundaries for no reason, within the same small org or team, is madness, imo.
> That means a breaking change is a refactor and a single commit, whereas with a service boundary you need a whole migration.
This decoupling-of-updates-across-a-call-boundary is one of the key reasons why I _prefer_ microservices. Monoliths _force_ you to update your caller and callee at the same time, which appears attractive when they are 1-1 but becomes prohibitively difficult when there are multiple callers of the same logic - changes take longer and longer to be approved, and you drift further from CD. Microservices allow you to gradually roll out a change across the company at an appropriate rate - the new logic can be provided at a different endpoint for early adopters, and other consumers can gradually migrate to it as they are encouraged or compelled to do so.
Similarly with updates to cross-cutting concerns. Say there's a breaking change to your logging or testing framework, or an encryption library, or something like that. You can force all your teams to down tools and to synchronize in collaborating on one monster commit to The Monolith that will update everything at once - or you can tell everyone to update their own microservices, at their own pace (but by a given deadline, if InfoSec so demands), without blocking each other. Making _and testing and deploying_ one large commit containing lots of changes is, counter-intuitively, much harder than making lots of small commits containing the same quantity of actual change - your IDE can find-and-replace easily across the monorepo, but most updates due to breaking changes require human intervention and cannot be scripted. The ability for different microservices within the same company to consume different versions of the same utility library at the same time (as they are gradually, independently, updated) is a _benefit_, not a drawback.
> a library call [...] is much more performant [... than] these expensive boundaries
I mean, no argument here - but low latency tends to be pursued by developers beyond the point of actual experience improvement. If it's your limiting factor, then by all means look for ways to improve it - but designing for fast development and deployment has paid far greater dividends, in my experience, than over-optimizing for latency.
> Monoliths _force_ you to update your caller and callee at the same time
It's possible to migrate method calls incrementally (create a new method or add a parameter). In large codebases, it's necessary to migrate incrementally. The techniques overlap with those for changing an RPC method.
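A minimal sketch of that incremental approach in Kotlin (the names are hypothetical): the old signature keeps forwarding to the new one until the last caller has moved, much like keeping an old RPC method alive during a migration.

    data class Order(val id: String)
    data class OrderOptions(val expedited: Boolean = false)
    data class Receipt(val orderId: String)

    class OrderService {
        // Step 1: introduce the new signature alongside the old one.
        fun placeOrder(order: Order, options: OrderOptions): Receipt {
            // ... real logic would live here ...
            return Receipt(order.id)
        }

        // Step 2: keep the old signature as a deprecated forwarding shim
        // while callers migrate at their own pace. Step 3: delete it.
        @Deprecated(
            message = "Pass OrderOptions explicitly",
            replaceWith = ReplaceWith("placeOrder(order, OrderOptions())")
        )
        fun placeOrder(order: Order): Receipt = placeOrder(order, OrderOptions())
    }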
You can absolutely reason locally with libraries. A library has an API that defines its boundary. And you can enforce that only the API can be called from the main application.
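One way that boundary can be enforced, for example in Kotlin, is module-level visibility: only the public API is visible to the main application, while everything else stays `internal` to the library module. A minimal sketch with made-up names:

    // Inside the library module (e.g. a Gradle module named "billing").

    // Public API: the only surface the main application can call.
    interface BillingClient {
        fun charge(customerId: String, amountCents: Long): ChargeResult
    }

    sealed class ChargeResult {
        data class Success(val transactionId: String) : ChargeResult()
        data class Declined(val reason: String) : ChargeResult()
    }

    fun billingClient(): BillingClient = DefaultBillingClient()

    // Implementation detail: `internal` makes this class invisible outside the
    // module, so the main application physically cannot reach past the API.
    internal class DefaultBillingClient : BillingClient {
        override fun charge(customerId: String, amountCents: Long): ChargeResult =
            ChargeResult.Success(transactionId = "txn-$customerId")
    }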
>The upside seems to be some mythical infinite scalability which will collapse under such positive feedback loops.
Unless I misunderstand something here, they say pretty early in the article that they didn't have autoscaling configured for the service in question and there is no indication they scaled up the number of replicas manually after the downtime to account for the accumulated backlog of requests. So, in my mind, of course there can be no infinite, or really any, scalability if the service isn't allowed to scale...
I’ve seen monumental engineering effort go into managing systems because for one reason or another people refused to use (or properly configure) autoscaling.
> Your IDE can help if there are 500 functions, but nothing would help you if you have 500 microservices.
Using micro-services doesn't mean you're using individual repositories and projects for each one. The best approach I've seen is one repo, with inter-linked packages/assemblies (lingo can vary depending on the language).
A monolith with N libraries (instead of N microservices) works so much better in my experience. You avoid the networking overhead, and the complexity of reasoning about all the possible ways N microservices will behave when one or more of them crash.
What you are describing, where 1 function = 1 service, is serverless architecture. The "ideal" with any service (micro or macro) is to get it so that it maximises richness of functionality over scale of API.
The concepts here apply to any client-server networking setup. Monoliths could still have web clients, native apps, IoT sensors, third-party APIs, databases, etc.
The real reason is that it's impossible to safely upgrade a dependency in Python. And by the time you realise this you're probably already committed to building your system in Python (for mostly good reasons). So the only way to get access to new functionality is to break off parts of your system into new deployables that can have new versions of your dependencies, and you keep doing this forever.
JavaScript is very fast if you want/need it to be. The Emscripten/asm.js approach is in the same category as native, for example. Your simple code won't be too fast, but not all code needs to be.
Mostly because Linus won't accept anything other than C on his beloved kernel.
Sun had experimental support for Java drivers on Solaris, Android has support for writing drivers in Java, and Android Things only allowed drivers written in Java.
Kotlin’s decision to make every exception a runtime (unchecked) exception is the main reason I don’t use Kotlin. It’s especially baffling that they realized the issue with implicit nullability and got rid of it (though not with Java methods, which opens another can of worms), then went and introduced implicit “exceptionality”.
The correct way to deal with Java’s checked exceptions would have been introducing a Result type, or, preferably, type algebra, like in TypeScript, so something like:
fun openFile(fileName: String): File | FileNotFoundException {…}
Then you could handle it similarly to null:
val content: String | FileNotFoundException = openFile("myfile.txt")?.read()
…then you have to do a check before you use content as a String…
or
val content: String = openFile("myfile.txt")?.read() ?: "File not found"
(There could also be other possible ways of handling the exception, like return on exception, jump to a handling block, and so on.)
In fact, null should be treated as an exceptional value the same way all exceptions should be.
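For comparison, a rough sketch of how that shape can be approximated in today's Kotlin, with a sealed type standing in for the proposed union (the names are hypothetical; Kotlin has no union types):

    import java.io.File

    // Stands in for the proposed `File | FileNotFoundException` union.
    sealed class OpenResult {
        data class Opened(val file: File) : OpenResult()
        data class NotFound(val fileName: String) : OpenResult()
    }

    fun openFile(fileName: String): OpenResult {
        val file = File(fileName)
        return if (file.exists()) OpenResult.Opened(file) else OpenResult.NotFound(fileName)
    }

    fun main() {
        // The compiler forces the caller to handle both cases, much like a
        // checked exception would, but as an ordinary value.
        val content: String = when (val result = openFile("myfile.txt")) {
            is OpenResult.Opened -> result.file.readText()
            is OpenResult.NotFound -> "File not found"
        }
        println(content)
    }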
The ergonomics of checked exceptions may be debatable, but Go’s explicit error handling at essentially every function call is definitely worse.
Yeah, I've got a lot of Java experience and a wee bit of Go experience, and I agree with you. I like Go in almost every way except for its error handling. It's just wrong to have to check every goddam function one by one.
I guess that's the reason why most Java programs I use can't produce proper user-facing error messages: it is so easy to just ignore error handling. The exception will be caught at the top level, right? This is how almost every Java CLI tool prints a stack trace on even the most trivial user errors, like file not found.
Having to deal with errors and forcing the developer to do proper error handling is a good thing.
I don't mind Go's errors.
I do mind the complete lack of hierarchy in that.
Java's exceptions are hierarchical.
I can create an exception that specializes `IOException`, and I feel that is really powerful.
Go added this half-baked and much later.
So, most FOSS libraries don't support it yet.
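A rough Kotlin/Java-style sketch of that kind of specialization (the exception and function names are made up): one handler for `IOException` also covers every subtype of it.

    import java.io.IOException

    // A domain-specific exception that specializes IOException.
    class ConfigNotReadableException(path: String, cause: Throwable? = null) :
        IOException("Could not read config at $path", cause)

    fun loadConfig(path: String): String {
        // Fails unconditionally, purely for illustration.
        throw ConfigNotReadableException(path)
    }

    fun main() {
        try {
            loadConfig("/etc/app.conf")
        } catch (e: IOException) {
            // Catches IOException and any specialization of it,
            // including ConfigNotReadableException.
            println("I/O problem: ${e.message}")
        }
    }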
Both Java checked and unchecked exceptions are inferior to signaling errors by return values like Go/Rust/Haskell do.
Exceptions are not composable, cannot be generic, and it is not visible in the source code which lines can throw, so every line is a potential branching point.
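A small sketch of the composability point using Kotlin's standard `Result` type (the file format and helper names are made up): each step is generic over its success type, and the failure path is an ordinary value that threads through the chain.

    import java.io.File

    data class Config(val port: Int)

    fun readFileText(path: String): Result<String> =
        runCatching { File(path).readText() }

    fun loadConfig(path: String): Result<Config> =
        readFileText(path)
            .mapCatching { text -> text.trim().toInt() } // a bad number becomes a failure value, not a hidden throw
            .map { port -> Config(port) }

    fun main() {
        // The caller decides what a failure means; nothing escapes unnoticed.
        val config = loadConfig("port.txt").getOrElse { Config(port = 8080) }
        println(config)
    }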
This isn't even a joke. Technology in software development has been progressing horizontally. Nothing is improving but abstractions and patterns are changing constantly.
Engineers need something to talk about in their OKRs. One strategy is to refactor things into microservices.
> Scale is measured in millions of instances: On 2022/12/21, the microservice topology contained 18,500 active services and over 12 million service instances.
They do go on to say that microservice as a concept is poorly defined. Are you suggesting that they mostly have 18500 of what we’d normally consider monoliths?
I thought Google at least was a bunch of micro services which is why they needed something like Borg and eventually open sourced a version of it called K8s.
Total BS.
Google has tons of services.
There are lots of them, but they don't have a `fileRead` service and a `fileWrite` service. Rather, they have a `gfs` service that can read/write/modify etc., everything related to files.