Microservices: Why Are We Doing This? (michaeldehaan.substack.com)
304 points by htunnicliff on March 21, 2022 | hide | past | favorite | 316 comments



Honestly, we originally did microservices because it sounded like a fun idea and because it would look really cool in our marketing materials. At the time, this was a very shiny new word that even our non-tech customers were dazzled by.

As oxidation and reality set in, we realized the shiny thing was actually a horrific distraction from our underlying business needs. We lost 2 important customers because we were playing type checking games across JSON wire protocols instead of doing actual work. Why spend all that money for an expensive workstation if you are going to do all the basic bullshit in your own brain?

We are now back on a monolithic software stack. We also use a monorepo, which is a natural pairing with this approach. Some days we joke as a team about the days when we'd have to go check for issues or API contract mismatches across 9+ repositories. Now, when someone says "Issue/PR #12842" or provides a commit hash, we know precisely what that means and where to go to deal with it.

Monolithic software is better in literally every way if you can figure out how to work together as a team on a shared codebase. Absolutely no software product should start as a distributed cloud special. You wait until it becomes essential to the business and even then, only consider it with intense disdain as a technology expert.


The problem is that there are so many developers now who have never had any experience of anything that isn't some botched attempt at microservices. The idea that it's possible to encapsulate code and separate concerns in any other way is foreign to them, and an "API" to them is 100% synonymous with a REST/gRPC interface. So there's nothing for them to revert to and they are doomed to repeat this pattern, clearly with the impression that this is what app development is.

Meanwhile a lot of the industry is trying to tell them that their problem is they haven't separated things enough and should be using lambdas for everything.


> So there's nothing for them to revert to and they are doomed to repeat this pattern, clearly with the impression that this is what app development is

The only practical solution I have found for this: Carefully select a Padawan and guide them gently away from the forces of evil until they have gained enough situational awareness to spot these patterns and defend themselves. Not everyone can be saved, and at this point I fear that it is a majority.

If someone goes too far into cloud insanity, there is (in my experience) very little you can do to bring them back down to reality. At least, not on time frames the business owners seemed interested in when looking at new hires. I have had a much easier time taking in someone totally green and getting them happy/productive on monolithic software than I have with uService/AWS-certified 'experts' (et al.).


I tried that for a while but now my reviews are basically:

"No, stop, don't."

https://youtu.be/uVdDXeYM4ss

It's like I'm talking to children, they think I'm just old and stuck in my ways and that I just don't understand.

Eventually my prediction comes true but none of them have ever said anything, they just go implement what I suggested and act like they solved the world's greatest mystery.

I guess it's easy to forget my comments from months earlier, it's almost as if I've seen their code before and know how it ends.



This is happening to me, and it's driving me nuts. Any advice on how to get out of it?


One thing that has periodically worked for me is the application of some fun infographic-tier latency figures to really drive home the argument for why distributed anything generally sucks ass. I.e., would you rather that customer transaction be:

1) A direct method invocation resolved within the same L1 cache contents.

or

2) One quick network hop just 5 milliseconds away?

Assuming worst case processing semantics (global total order), you would be able to process ~10 million times more customer transactions per unit time with option 1 vs option 2. This is seven orders of magnitude.

In my experience, not a whole lot is truly worst case, but most complicated & important business systems (banking/finance/inventory/logistics/crm/etc.) are pretty close if you don't want to be chasing temporal rabbits around all day.
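For reference, the back-of-the-envelope arithmetic behind those figures is easy to reproduce (assuming roughly 0.5 ns for an L1-resident method call and the 5 ms network hop from above; both latencies are illustrative, not measurements):

```python
# Serialized (global total order) processing: one transaction must finish
# before the next begins, so throughput is simply 1 / latency.

L1_CALL_SECONDS = 0.5e-9     # ~0.5 ns: direct method call, data hot in L1
NETWORK_HOP_SECONDS = 5e-3   # 5 ms: one "quick" network round trip away

# The throughput ratio is just the inverse of the latency ratio.
ratio = NETWORK_HOP_SECONDS / L1_CALL_SECONDS
print(f"option 1 handles ~{ratio:,.0f}x more transactions per unit time")
```

That comes out at ~10,000,000x, i.e. the seven orders of magnitude mentioned above.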


I worked at a place that had a microservice for validations. You made a request with the item (singular) to be validated and the rule to validate it by. The most common validations were things like "is this value greater than 15" or "are these values equal".

Nobody else saw the problem here: establishing an HTTP connection, serializing a JSON document, deserializing it on the other side, then doing "14 == 15" anyway before going back the way it came. Lighting up a hundred thousand lines of code vs one.

This was done as it iterated over large files, millions of items, millions of requests.

They were genuinely confused why it was slow.
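A sketch of what each of those per-item validations cost, with the network itself left out entirely (the rule names and payload shape are invented for illustration, not the actual system):

```python
# The anti-pattern described above: per-item validation via an HTTP-style
# round trip vs. an inline comparison.
import json

def validate_inline(value, threshold=15):
    # The entire "service", in one expression.
    return value > threshold

def validate_via_service(value, threshold=15):
    # Everything the remote call forces you to do per item, minus the
    # actual network stack: serialize, "transmit", parse, compare,
    # serialize, parse again. Now multiply by millions of items.
    request_body = json.dumps({"rule": "greater_than", "value": value,
                               "operand": threshold})
    payload = json.loads(request_body)               # server: deserialize
    result = payload["value"] > payload["operand"]   # the actual work
    response_body = json.dumps({"valid": result})    # server: serialize
    return json.loads(response_body)["valid"]        # client: deserialize

assert validate_inline(14) == validate_via_service(14) == False
```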


Your story both makes me want to laugh and cry because I've seen this pattern so many times.

These days when I do architecture interviews, I always make sure the problem can be solved distributed or monolithic and when they do distributed, I asked them why and to explain the trade-offs. If they do monolithic, I ask them why not distributed. Etc. For senior engineers, it's such a good indicator of whether or not they've truly worked in distributed systems and understand the human side of the problem.

The big O stuff is great, but the human and organizational costs are easily as important to assess as the computational and memory costs.


I am sure "ValidationService" made perfect sense to everyone the first day it was dreamed up. IIRC we had one named that too for a while back when we were lost in the dark forest.


This is actually why I think we need more cloud and not less. The problem is poorly designed code, not monolith or microservice.

For example, ValidationService as described above could be replaced with OpenPolicyAgent and would scale much better.

That said, I could see someone pointing at OpenPolicyAgent and asking why they didn’t separate distributed updates from the policy spec as two distinct projects, this way you could re-use parts of OPA to distribute config files or feature flags. You can now too, but it requires the extra hop of compiling a function to answer.

Come to think of it, distributed state updating is roughly the same general problem that Kubernetes Operator pattern solves.

Another way of putting it, we have cloud primitives but not enough of them, maybe - or they aren’t well enough explained - to make the choices or restrictions of different architectures a bit more obvious? I especially look forward to a future when we get “design systems” for the programming we do with business logic in the cloud.

I’m over-emphasizing cloud here. I mean code you don’t directly have to maintain and write regardless of where it runs or who wrote it.


Scaling is not the issue


Perhaps I didn’t explain myself fully. OpenPolicyAgent distributes the policy so you can integrate[1] it with systems that need to ask for validation such that lookups are constant time and performed on the same machine but distributed - scaled - using code and data shipped to every consumer/integrator of the validation service, instead of making a separate call to validate. So you can have centralized control of your validation logic and data but have it run as part of a decentralized application or on the same host as part of a pod. That is what I meant by a design that “scales”.

That said, depending on the type of validation performed, zero trust systems for example are often built using centralized validation endpoints, so it’s not entirely a bad practice.

1: https://www.openpolicyagent.org/docs/latest/integration/
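To make the shape of that pattern concrete, here is a toy sketch (this is not OPA's actual API; the bundle format and rule names are invented): policy is authored centrally, shipped to each consumer as data, and evaluated in-process, so no network call happens per validation.

```python
# "Bundle" published by a central policy service (hypothetical shape).
# Consumers refresh it infrequently; per-request checks stay local.
POLICY_BUNDLE = {
    "max_order_value": {"op": "lte", "operand": 10_000},
    "min_quantity":    {"op": "gte", "operand": 1},
}

OPS = {"lte": lambda v, o: v <= o, "gte": lambda v, o: v >= o}

def evaluate(rule_name, value, bundle=POLICY_BUNDLE):
    # Constant-time local lookup + comparison. The only distributed part
    # is the bundle refresh, not the per-request validation.
    rule = bundle[rule_name]
    return OPS[rule["op"]](value, rule["operand"])

print(evaluate("max_order_value", 9_500))  # True
print(evaluate("min_quantity", 0))         # False
```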


Reading your comments I get the sense that you consider system architecture to be reducible to 2 possible choices (monolith and m.s.) and you are confusing distributed systems with micro-service architecture. Would you consider a classic 3-tier (api - domain logic - database) to be a “distributed” system? Are micro-service systems “distributed”? (The answer is no to both, btw).

Micro-services were promoted for a variety of reasons. One of which was facilitating rolling out projects by consulting companies given the economic incentives for such businesses. Related was the VC driven development model that required rolling out features at greater speed than that of thoughtful development. Another was (remains?) the prevalent lack of competence in designing schemas (required for the basic n-tier system culminating in the database tier).


My advice would be to carefully construct an explanation of the scenarios where microservices do apply, and then explain why you are not currently in that situation. The best microservice examples I've ever found were for the addition of features to legacy systems, the ability to write minor additional things in alternate languages, and most critically, when the original code base couldn't be changed or was lost, and yet more functionality was required. There are valid reasons to add a microservice; there are probably not good reasons to take a normal code base made by a modern company and completely shift to only microservices.


> My advice would be to carefully construct an explanation of the scenarios where microservices do apply, and then explain why you are not currently in that situation.

I tried that once. Didn't work. End result was just pure pain.


Couldn't agree more - just like NoSQL vs regular old SQL - don't assume you need a NoSQL solution until you actually prove you need it. Probably 90-97% of solutions will be better off with a relational database; boring, yes, but they 'just work'. Choose NoSQL only when you need it, and choose microservices the same way.

If you 'think' you need NoSQL or a microservice architecture - chances are you don't.


I don’t think I need NoSQL. It’s just that I hate having to map my structured data into tables and rows and joins and schemas and primary keys. A document|object|key-value store just usually maps better to the data I’m trying to model than a trumped-up spreadsheet that requires me to speak a foreign language to it. Others’ mileage may vary, of course. It’s just been my experience in the domains I’ve worked in.

(Ironically, I agree very much with your general sentiment, just not the example of illustration you chose)


You can do those in most SQL databases too, with the option of structured data where some of your data fits it.


> having to map my structured data into tables and rows and joins and schemas and primary keys

You don't have to. You can just use JSONB.
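For instance, you can keep the document shape and still sit inside a relational database. A minimal sketch using Python's stdlib sqlite3 and SQLite's JSON functions as a stand-in for Postgres JSONB (the Postgres syntax differs, e.g. `body->>'name'`, but the idea is the same):

```python
# "Document store inside a relational DB": store the JSON as-is, query
# into it without designing a rigid schema for the nested fields.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO docs (body) VALUES (?)",
             (json.dumps({"name": "widget", "tags": ["a", "b"]}),))

# Reach into the document with a path expression.
row = conn.execute(
    "SELECT json_extract(body, '$.name') FROM docs"
).fetchone()
print(row[0])  # widget
```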


I wish I could justify using SQL more. It's just way too easy to use Dynamo. Unless my data is complicated enough to warrant a relational model, it's going in Dynamo.


I don't disagree; it is so easy to build solutions on top of DynamoDB - and then I regret it when the access patterns change and the solution falls apart. That's when I wish I had stuck with tried-and-true relational.

That is kind of the problem: DynamoDB - and other NoSQL databases - allow you to just start building your app without any real thought about what data you will need and how it will be used, but you are often shooting yourself in the foot by not mapping this stuff out properly in advance.


> don't assume you need a NoSQL solution

Likewise, people who only know relational DBs shouldn't assume they need an SQL solution. It's remarkably rare to encounter engineers with real, thoughtful experience of both, who know which tool to pick up for which problem.


I think one mistake is using the term “monolithic” in comparison to “microservices”. To a whole new generation of programmers (and a fair proportion of older ones also) it has a negative meaning somewhere approaching “a huge and growing mess of smelly unstructured, non-modular code”. It is a negative term invented to compare against the glory that be “microservices”.

I prefer the term “Local Modular Application”, for lack of anything better.


To some extent I think "monolith" is actually changing connotation as people realize that microservices are not a silver bullet. For example, Shopify calls their system a "modular monolith":

https://shopify.engineering/deconstructing-monolith-designin...


What about just "service" or "web service"?


Over the past 10 years dev tools (eg. git, IDEs, laptops) have also become more performant by some small constant factor, 2-3x, which isn't massive in comparison to Moore's law stuff, but means that a larger fraction of new software projects can start monolithic and stay monolithic for longer.

Unless you're handling millions of lines of code and 1000s of developers, you won't need to abandon a monorepo or commit to expensive custom dev-platform investments.


> Unless you're handling millions of lines of code

When including our vendor's WSDL-generated reference sources, we are well in excess of 1 million lines of code. I think even omitting that we are very close.

I cannot imagine us ever getting too big in those terms. Our full checkout is ~400 megs right now. If some day GitHub says we are too big and need to break shit up, I'll move us to Perforce or something even more ridiculous.

Our main product does a full re-build in ~45 seconds on a powerful developer machine. Incrementals for more likely work areas are closer to 8 seconds. Full monorepo checkbuild takes about 5 minutes. For reference, we do use .NET so JIT is an ally here.


> full re-build in ~45 seconds

That's cute. How about the test suites?


Likely faster, less flaky, less complicated setup and higher coverage than the integration test suite of a set of microservices.


Using Go stack here. Can confirm it can be done. We're at 8s to build and another 15s to run full test suite.


If a repo needs to be broken up for performance or organisational reasons, is there really a reason to prefer to integrate the parts as microservices rather than as libraries?


The main benefit of microservices there is that they can be independently deployed. If you have libs in a monolith you have to redeploy the entire thing, which may involve coordination.

I still mostly prefer the libs + monolith approach though.


10 years ago SSDs weren't common, and they are a 10x performance improvement on their own.


I go back and forth on this. Having a single service is definitely much easier. However, consider the following scenario: you want an application that asynchronously ingests data from different sources (whether they are files, streams, database stores, etc.), performs some computations and aggregations, stores the results in a local DB, then allows the user to display them. Would it make sense to separate out the ingestion code into its own service, which calls out to another service that is responsible for storing to the local DB? The rationale is that writing to the DB is now controlled through one service (almost like a queue, where connections could be controlled), and the ingestion could be scaled up/down independently of the DB write service. Thoughts?


Your problem was using JSON instead of a typed language like Protocol Buffers or even Java RPCs, and that you were using separate repos for everything.

Microservices don't even have to run in separate binaries, let alone separate machines.


> Microservices don't even have to run in separate binaries, let alone separate machines.

If you're not running separate binaries, what you have is a monolith with well-defined module boundaries, not a set of microservices.


Which is actually much better than a distributed monolith (random microservices thrown together just because it ticks some boxes on management's résumés).


Agreed! I think a well-structured monolith is what most projects call for.

What I don't want is for people to buy the microservices hype so thoroughly that they start thinking "microservices" just means "well-architected system".


I guess a binary is pretty vague. They could mean they deploy a single container image or something.


If you have a single deployment unit, you have a monolith, regardless of how that deployment unit is structured on the inside. There's a lot of fuzziness about what "microservices" means, but some degree of independent deployability is pretty non-negotiable.


You can of course have a single binary deployed with different versions for different services.

More commonly the binaries would differ but share a lot of code in a library of the monorepo.


I was fortunate to have a similar experience early on. That being said, I still firmly believe there are cases where "microservices" are appropriate, such as when you're married to a specialized framework that only supports Python 3.5 but want to use modern tooling for everything else as much as possible :)


I think this is a fair point. If your reasoning for developing multiple services is to shim legacy services, that is perfectly acceptable in my view. It's when you intend to start from zero with these things that the problems begin.

I would say if you begin a journey to shim legacy, it should also be a journey to entirely deprecate it. You can gradually consume an old monolith with a new one over time if you are deliberate enough.


Yes, obviously there are use cases where microservices are the better architecture, but the point is that they're very rare; most teams/companies never need them, because they would complicate their lives unnecessarily.


I'm a huge proponent of microservices, having worked on one of the earliest and largest ones in the cloud. And I absolutely think that they provide huge advantages to large companies -- smaller teams, easier releases, independent scaling, separation of concerns, a different security posture that I personally think is easier to secure, and so on.

It's not a surprise that most younger large enterprises use microservices, with Google being a notable exception. Google however has spent 10s, possibly 100s of millions of dollars on building tooling to make that possible (possibly even more than a billion dollars!).

All that being said, every startup I advise I tell them don't do microservices at the start. Build your monolith with clean hard edges between modules and functions so that it will be easier later, but build a monolith until you get big enough that microservices is actually a win.


I saw lots of churn working on microservices that were pre-production. When it’s like this, things are more tightly coupled than the microservice concept would have you believe and that causes additional work. Instead of writing a new function at a higher version, you had to go change existing ones - pretty much the same workflow as a monolith but now in separate code bases. And there wasn’t a need for any of these microservices to go to production before the front end product, so we couldn’t start incrementing the versioning for the API endpoints to avoid changing existing functions. A monolith almost doesn’t need API versioning for itself (usually libraries do that), but it’s effectively a version 1.0 contract if translated to microservices.


In what way is Google an exception? Are you saying that only one or a small number of different binaries run on Google's production servers?

Google famously has a monorepo but that's different from a monolithic service architecture.


Yes, that is a fair distinction that I simplified over. You don't really get a lot of the gains of microservices if you're using a monorepo, so while they do have multiple binaries/services, you still have to check into a single repo and wait for all the tests/etc. To be fair I haven't visited Google in a while and maybe it's changed now, but at least decade ago it was very different from how everyone else did microservices.


> You don't really get a lot of the gains of microservices if you're using a monorepo

I think the two are completely orthogonal.

At Google, when you check in code, it tests against things it could have broken. Not all tests in the system. For most services, that means just testing the service. For infrastructure code, then you have to test many services.


It seems things have changed since I last looked at how Google does deployments. Back then, every test ran on every checkin to the mainline, and all code was checked into the mainline. It even talks about that in the Google SRE book.


I think you were misunderstanding something. Why would every code change cause a compile and test across the entire company? That is to say: not only does that not scale, it’s totally unnecessary*. Only the downstream consumers of a change are rebuilt and tested, like you’d expect (see: bazel, and the monstrous makefile before that). In this sense, the fact that Google uses a monorepo is mostly an implementation detail. It has some impact on the company’s workflows and tooling, but not its software architecture.

* unless you’re changing a very common dependency, of course, and Google has tooling for this.


Hermetic builds allow you to cache your builds and your test executions in such a way that running all builds and all tests for every commit is indistinguishable from executing only the builds and tests that the commit could have affected.
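A toy sketch of that idea: key each test run by a hash of its name plus the contents of its inputs, so an unchanged test is a pure cache lookup (the file names, versions, and "PASS" result are placeholders, not real tooling):

```python
import hashlib

cache = {}
executions = 0  # counts how many times a test actually executed

def run_test(name, input_contents):
    """Return the test result, executing only on a cache miss."""
    global executions
    key = hashlib.sha256(
        "\0".join([name, *sorted(input_contents)]).encode()
    ).hexdigest()
    if key not in cache:
        executions += 1      # a real runner would execute the test here
        cache[key] = "PASS"  # pretend the test ran and passed
    return cache[key]

run_test("test_parser", ["parser.py:v1", "lexer.py:v1"])
run_test("test_parser", ["parser.py:v1", "lexer.py:v1"])  # unchanged: cache hit
run_test("test_parser", ["parser.py:v2", "lexer.py:v1"])  # input changed: re-run
print(executions)  # 2
```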


I've been at Google for almost 12 years. It's been this way the whole time I've been there.


Even if that were true (and it is not true), the non-dependent tests would finish in zero time because the results are cached and hashed by dependency tree.


I assumed that was always the case and why it made sense that they run all tests.


Each releasable unit can wait for whatever tests you want. Usually it's just the tests for that unit. Google is actually a good example of why monolith/microservices is a completely different concept to monorepo/multirepo.

I.e. you can put your monolith in multiple repos, and you can put 100,000+ services in 1 repo.


The past few companies I was at, we discussed whether we wanted a single repo or multiple repos. But that was a separate conversation from microservices, so I don't think it's unusual to have a monorepo with microservices.


> they provide huge advantages to large companies […] every startup I advise I tell them don't do microservices at the start.

I think you nailed it. Microservices are a solution for organizational problems that arise when a company grows in size; unfortunately, it's not rare to see small startups with a handful of engineers and 5 to 10 times more services…


I remember Amazon 15 years ago, when a newly hired Senior Principal (Geoff something? from Sun?) complained to us about having more services than engineers.


Strongly agree with this. It's about leaning into Conway's law. How micro the services get is a variable, for sure, but it's definitely worth considering as partly technical and principally an organizational problem.

With good defaults, you can have a dev tools / platform team create a blessed path that most teams will easily adopt so you get a mostly standardized internal architecture (useful for mobility). It's harder to allow for lessons learned from one service team to transition to the org as a whole, but if the dev tools / platform team has great Principal SWEs, it'll work. It does mean that you need great people on the platform team, though, since mediocre people will attempt to freeze development to fixed toolchains and will be unable to see the big picture.

I think Amazon does a good job with their Principals here.


> Build your monolith with clean hard edges between modules and functions so that it will be easier later,

This is unfortunately very easy to override. Oh the rants I could write. If I could go back in time we would've put in a ton of extra linting steps to prevent people casually turning private things public* and tying dependencies across the stack. The worst is when someone lets loose a junior dev who finds a bunch of similar looking code in unrelated modules and decides it needs to be DRY. And of course nobody will say no because it contradicts dogma. Oh and the shit that ended up in the cookies... still suffering it a decade later.

*This is a lot better with [micro]services but now the code cowboys talk you into letting them connect directly to your DB.
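As a sketch of the kind of lint step the parent means, here's a toy check (using Python's stdlib ast module; the module and function names are hypothetical) that flags imports reaching into another module's underscore-private names:

```python
# Flag cross-module imports of underscore-private names, one of the ways
# "private things casually become public" in a monolith.
import ast

def find_private_imports(source):
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ImportFrom) and node.module:
            for alias in node.names:
                if alias.name.startswith("_"):
                    violations.append(f"{node.module}.{alias.name}")
    return violations

bad = "from billing.internal import _recalculate_totals\n"
print(find_private_imports(bad))  # ['billing.internal._recalculate_totals']
```

A real setup would wire this (or an off-the-shelf import linter) into CI so the diff that flips something private to public has to be an explicit, reviewed decision.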


I used to be afraid of that, but I've found that it is possible to solve this problem with Code Ownership rules in Github.

(Which sometimes you also need even when using microservices, if you use a monorepo, for example)


>...you get big enough that microservices is actually a win.

Can you speak more about the criteria here?

You may be implying that microservices enforce Conway's law. If so, then when the monolith divides, it "gives away" some of its API to another name, such that the new node has its own endpooints. This named set is adopted by a team and evolves separately from that point on, according to cost/revenue. The team and its microservice form a semi-autonomous unit, in theory able to evolve faster in relative isolation from the original.

The problem from the capital perspective is that you get a bazillion bespoke developer experiences, all good and bad in their unique and special ways, which means that the personal dev experience will matter, a guide in the wilderness who's lived there for years. The more tools are required to run a typical DX, the more tightly coupled the service will be to the developers who built it. This generally favors the developer, which may also explain why the architecture is popular.


The first part of your comment is accurate (and beautifully poetic). But I don't believe the second part follows from the first.

At most companies that do microservices well, they have a dedicated platform team that builds tools specifically for building microservices. This includes things like deployment, canaries, data storage, data pipelines, caching, libraries for service discovery and connections, etc.

This leaves the teams building the services focusing on business logic while having similar developer experiences. The code might use different conventions internally and even different languages, but they all interact with the larger ecosystem in the same way, so that devs at the company can move around to different services with ease, and onboarding is similar throughout.


>they all interact with the larger ecosystem in the same way, so that devs at the company can move around to different services with ease, and onboarding is similar throughout.

But big enterprises inevitably lose "stack coherence" over time, through drift but also acquisitions. Finding the lowest common denominator to operate and modify it all, while maintaining a high level of service (uptime, security, data integrity, privacy, value), turns out to be a tricky problem - just defining the product categories is a tricky problem!

Well I for one would love to see such a thing properly functioning. I've seen two attempts, but neither were successful.


Stack coherence is even worse when your dependencies are all different companies!


> Build your monolith with clean hard edges between modules and functions so that it will be easier later, but build a monolith until you get big enough that microservices is actually a win.

I'd like to see software ecosystems that make it possible to develop an application that seems like a monolith to work with (single repository, manageable within a seamless code editing environment, with tests that run across application modules) and yet has the same deployment, monitoring and scale up/out benefits that microservices have.

Ensuring that the small-team benefits would continue to exist (comparative to 'traditional' microservices) in that kind of platform could be a challenge -- it's a question of sensibly laying out the application architecture to match the social/organizational structure, and for each of those to be cohesive and effective.


Isn’t the erlang ecosystem pretty much this?


Perhaps it's time I learned some erlang :)


It's never a bad time to learn some Erlang.

https://gist.github.com/macintux/6349828


Thank you!


Dropbox seems to have done something very close to what you describe.

https://dropbox.tech/infrastructure/atlas--our-journey-from-...


OSGi


I went from a company with an engineering team of 40-odd engineers, which was trying to move towards microservices and being really slowed down by it, to a company with thousands of engineers in a hybrid mode: still a couple of monoliths, plus a lot of microservices. I can definitely appreciate that there is very much a scale at which it absolutely makes sense for an engineering organisation to use microservices, and very much a scale below which it's fairly counter-productive.


Why would you advocate for microservices over services?


I grew up using unix where the philosophy is "do one thing and do it well" and I think that carries over well into microservices.

But honestly I'm not sure there is much of a line between the two. I've seen microservices that just return True/False and ones that return 100 lines of json, which are arguably more web-services than microservices.

I honestly think it's a distinction without meaning.


The "do one thing and do it well" Unix philosophy already broke down decades ago when people started adding things like "cat -v" and ability to sort ls and whatnot. People like Doug McIlroy still argue that's all "useless bloat". Pretty much the entire rest of the world disagrees. The point is that "do one thing and do it well" doesn't actually work all that well in reality.

A CLI is not a service: there is no operational complexity to "keep things running" with a CLI; you just chain some things together with pipes and that's that. The nice thing about that is that the text interface is generic and you can do things the original authors never thought of. With microservices this usually isn't the case, and things are extremely specific. This is also why "do one thing and do it well" doesn't really carry over very well to GUIs.

A lot of microservices I've seen are just functions calls, but with the extra steps of the network stack, gRPC, etc. Some would argue that this is "doing microservices wrong" – and I'd agree – but the reality of the matter is that this is how most people are actually using microservices, and that this is what microservices mean to many people today.

Instead of "microservices" we need to think about "event-driven logic", or something like that. Currently the industry is absolutely obsessed with how you run things, rather than how you design things.


" Hi ___, i saw your profile on linkedin and wanted to reach to say our team is looking to hire a 'Senior Boolean Microservice Architect' "


I don't get this idea that modularity and composition can only be achieved with separate processes communicating over a network. There are features within modern programming languages for achieving the same.


I don't think anyone is saying they can only be achieved with separate processes. It's more about independent scaling of the different parts of the system that is the big advantage. That plus the organizational advantages. It's a lot easier to maintain the modularity if you have different groups of engineers working on different services.


It's not that it can't be done; it's just the most consistent way to enforce that across teams that have minimal communication with each other.


It really is a distinction only made by people trying to sell you something, or sell you on something. Service-oriented architecture is leveraging the power (and the curse) of being able to connect computers over a network to get past the fundamental scaling limits of a single piece of hardware. How granular you want to make things is a design decision.


What's the difference?


The attitude. The "micro" of microservices betrays a religious zeal that more and smaller services are an unmitigated good, and that we should always strive for more of them. "Services" is for people who think they're a necessary evil, to be deployed under the right circumstances.


I see. I guess I've been lucky enough to be on teams that haven't thought of microservices that way.


I find the organizational arguments to be pretty convincing, but surely there must be a way to reap these rewards in a monolithic infra setup as well? Maybe someone should develop a "monolith microservice architecture" where all the services are essentially (and enforced to be) isolated, but once deployed is built like a single unit.

You could do it with docker-compose I guess, but optimally your end result would be a single portable application.


"monolith microservice architecture" = libraries

all of your foo-service json endpoints can be lib-foo apis (the original meaning!)


I suppose that's true in a way. There would have to be some serious scaffolding for it to work though. For a web service, for example, libraries would have to be able to register route handlers and such that they handle independently. Perhaps they could all be initialized in a common way using DI or something from a base gateway application. Versioning and interop testing would have to be figured out in some clever way.

Something like this would be the architecture I imagine.
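One possible shape of that scaffolding, sketched minimally (the `Host` class and the `lib_*` init functions are invented for illustration, not a real framework): each "service library" exposes an init function that registers its route handlers against a shared host, and the base gateway application wires them all up at startup.

```python
class Host:
    """Toy gateway application; real code would wrap Flask/FastAPI/etc."""
    def __init__(self):
        self.routes = {}

    def route(self, path):
        def register(handler):
            self.routes[path] = handler
            return handler
        return register

    def dispatch(self, path, request):
        return self.routes[path](request)

# lib_users -- would live in its own versioned library
def init_users(app):
    @app.route("/users/me")
    def me(request):
        return {"user": request.get("user", "anonymous")}

# lib_orders -- another independent library
def init_orders(app):
    @app.route("/orders")
    def orders(request):
        return {"orders": []}

app = Host()
for init in (init_users, init_orders):  # the DI/composition step
    init(app)
```

Each library stays isolated behind its init function, but the deployed artifact is a single portable application.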


> Versioning and interop testing would have to be figured out in some clever way.

But given that we already use libraries for everything, all of that stuff has already been figured out.


In my experience, for regular libraries, you usually want to pin version ranges and not always use the latest of every library. But when the main business logic and a whole team is dedicated to one of the "service libraries", whose responsibility is it to make sure that the version stays up to date across other service libraries? Do you leave it up to the main host app? Do you allow a multitude of service library versions? Do you skip versioning all together? In microservices you have an opaque facade in the form of an API that makes this a non-issue. Perhaps you'd want some simulacrum of this in the form of IPC between service libraries?

Do you use some sort of contract based testing between service libraries? Put all integration testing in the host application?

It's not obvious to me what the best approach would be.


Well, you have an API for libraries as well, and an even stronger one because you can have type guarantees, which you can't do with JSON (you can use gRPC etc but people usually don't).

With libraries it's easier than with microservices, because I can scan all the dependency files of all projects and immediately see which project relies on what library, which is much harder to do with microservices as a library author.

With those two things, and the fact that you can just keep using an older library version if you need to, whereas you can't easily keep using an older microservice version if it's been upgraded, I think libraries have lots of advantages in this.
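The dependency-scanning point might look something like this sketch, treating each project's requirements file contents as a string (the layout and pinning style are assumptions):

```python
def projects_using(lib_name: str, requirements_by_project: dict) -> list:
    """Given {project_name: contents of its requirements.txt}, return the
    projects that pin lib_name. As a library author this shows you every
    consumer -- something a microservice author can't easily see for callers."""
    hits = []
    for project, text in requirements_by_project.items():
        pinned = {line.split("==")[0].strip()
                  for line in text.splitlines() if line.strip()}
        if lib_name in pinned:
            hits.append(project)
    return sorted(hits)
```

With a service, the equivalent question ("who calls me, on which version of my API?") usually requires access logs or distributed tracing rather than a text search.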


I agree, there are probably a lot of advantages. It would be interesting to see a boilerplate structure for this workflow. Maybe it already exists in some form.


I can't imagine it being different from other libraries, bundle your code up in the language's supported way, publish it in some internal registry, add it to your dependencies, that should be about it.


It is definitely possible to reap a lot of those things with monoliths. Separating internal code using libraries (as the sibling poster said), using different server clusters to serve different routes, deploying those clusters independently if possible, using multiple databases, separating the applications into different "areas" maintained by different teams. The one thing microservices can do that monoliths can't, however, is let each team use a different language.


> most younger large enterprises use microservices

Which ones? Amazon uses roughly 1-team-1-service, not 1-team-100-*micro*services.

Facebook famously built their main service as a monolith.

Edit: and don't get me wrong, I'm not saying services are bad - as long as they are the right size and with the right design rather than tiny.


Younger. Netflix, Dropbox, Stripe, Slack, Pinterest, Reddit is working on it, Smugmug, Thumbtack, a lot more I can't think of off the top of my head. Also I'm pretty sure Amazon has teams that maintain multiple services.


Then I'm glad I keep dodging the "cool and hip" hype-fueled companies.


Netflix, Dropbox, Slack, and Pinterest are all mature, profitable public companies. Not sure if they count as hype-fueled.

I'd say you're letting your biases blind you. Netflix writes mature robust code, mostly in Java, and uses microservices.


Amazon has more than one service per "2 pizza team".


I wrote "roughly" and literally made a comparison with 100 services per team.


The last microservice architecture I worked on consisted of 7 python repositories that shared one "standard library" repository. Something as simple as adding an additional argument to a function required a PR to 7 repos and sign off on each. When it came to release we had to do a mass docker swarm restart and spin up because the giant entanglement of micro-services was really just a monolithic uncontrolled Cthulhu in disguise.

The business revolved around filling out a form and PDF generations of said form. I felt like I got no work done in a year and so I left


So... you worked on something terrible that people called a "microservice architecture"? Once a pattern gets popular, people start writing nasty code in the style, and then the pattern takes the reputation hit and people move on to the next thing (or just back to the last thing). Rinse, repeat.

My company uses microservices; deploys restart one service and PRs are one repo at a time. There's a shared library, but it's versioned and there's nothing compelling you to keep on the bleeding edge.


Is this something a lot of people are missing with their microservice implementation? You need to be able to deploy each microservice independently of everyone else. If you have a change that's spread across multiple services, you should be able to just do them one-by-one in order. If you want to rollout a shared library, you should be able to just update your services one-by-one. If a coordinated rollout is required, then doesn't that kind of defeat the whole point of doing microservices?


> If a coordinated rollout is required, then doesn't that kind of defeat the whole point of doing microservices?

If a coordinated rollout is required for anything but a change of service API (and then only the service and its direct clients should be impacted, and even then a decent deprecation policy should eliminate the need for close coordination), you aren't doing microservices, because loose coupling is part of the definition of the pattern.


It's a big anti-pattern I have seen: a change requires simultaneous merges on multiple repos, and there is basically no way to avoid possible outages/broken systems during the window while it's deploying.


> Once a pattern gets popular, people start writing nasty code in the style,

Once it gets popular, people start “implementing the pattern” based on rumor, bad descriptions from unqualified (and sometimes ill-motivated) intermediaries, and some boss and/or tech lead’s fever dreams, and blaming it on the pattern.

(If the pattern is useful enough, it will be later invented under a new name to escape from the accumulated cruft, which will briefly succeed only to succumb to the same process until another cycle passes.)


Is there some sort of technical name for this phenomenon? Maybe "pattern decay" or "technical decay"? From what I have observed throughout my career, this happens to every pattern eventually. The people who come up with it obviously understand it themselves, but they have to serialize their understanding into some format like a conference talk or a book, which is inherently lossy since it's not an implementation. People in the wider community then deserialize the original understanding and (try to) build an implementation out of it, producing a myriad of serialized (but wrong) understandings out in the wild, which are then encountered by subsequent waves of engineers who take the pattern to have a new definition different from the originally intended one.


It's just cargo culting. A lot of programming advice/processes are fuzzy in a wider context. Generally they came to be in a specific context, and relating that context is hard. Eventually it's picked up as a fad and spread around; like a game of telephone, we lose nuance and context. Pair this with imposter syndrome and low confidence, and everyone tries to cover their ass and do whatever the current hype in the industry is doing.

What if I suggested an architecture where every 5th function call had the overhead of serialization and network time? Plus the reliability concerns that go with multi-machine calls, data-syncing issues, and the possibility of network partitions.

Pretty sure most people couldn't reason about this system. They'd suggest that instead of having network calls on every 5th call, let's really look at our use case and only insert them where we may have load issues and have to scale. Data state becomes a big concern: make sure we know where our data is at all times, and pass it only where needed for correctness rather than over-passing it, so we stay efficient.


Cargo culting. Of course. Derp! Why did that not immediately spring to mind???


> I worked on consisted of 7 python repositories that shared one "standard library" repository. Something as simple as adding an additional argument to a function required a PR to 7 repos and sign off on each

This is an engineering process failure, not a failure of microservices or shared dependencies. You should be versioning your shared library, that way you only need to make a deployment to the service that requires the update, leaving the others pegged at the previous version until a business or engineering need motivates the upgrade.


While we're at the topic of engineering process failures - is there any research/literature about software engineering in general that describes all the hows and whys of large-scale programming like that?


I have had a similar experience building microservices that used shared repositories. The PR paperwork was so bad that at one point I made all my services self-contained, just to avoid having to modify my own code in two different places and synchronize the changes.

The whole problem, I think, comes from the "split the code" cargo-cult. We need to think about why we're splitting the code, and use that why to figure out when to split code.

IMHO, code separation arises naturally from modular programming - once your code is mature enough, it becomes just a piece of glue around a set of libraries that you can just rip out and put in their own repos, provided that they're useful enough.


So, then you have been working on a distributed monolith and not a proper microservice architecture. Just saying.


but boy could your pdf generator system scale. Boy could it scale...


Except in my experience most attempts at microservices scale extremely poorly because they have nowhere near enough precision in their APIs to get just the information you need and so most applications end up fetching way, way more data than they need and whittling it down manually - and often having to stitch together the results from multiple calls.

That of course leads you down the path of creating an "oddly specific" API that caters exactly to the needs of the clients, down to the point that small changes in the client's needs require changes in the upstream service(s) - so your "separation of concerns" has become a joke.


Is it common in "proper" microservice architectures for services to retrieve data from other services via APIs?

I've worked on "microservices" systems before that do this. They were mostly shit.

The system I'm working on at the moment has each service subscribe to events and maintain its own database. The only comms is via events. It works pretty well and everything really is nicely separated, but it feels a bit haphazard.
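In miniature, the events-only style described above might look like this (a toy sketch: an in-memory bus, with dicts standing in for each service's own database; a real system would use Kafka, NATS, or similar):

```python
from collections import defaultdict

class Bus:
    """Toy in-memory event bus standing in for real messaging infrastructure."""
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subs[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subs[topic]:
            handler(event)

bus = Bus()

# "orders" service: its only comms with the world is via events,
# and it maintains its own database (a dict here).
orders_db = {}
def on_order_placed(event):
    orders_db[event["id"]] = event
bus.subscribe("order_placed", on_order_placed)

# "reporting" service: never calls "orders"; it independently builds
# its own view from the same event stream.
daily_totals = defaultdict(int)
def on_order_for_reporting(event):
    daily_totals[event["day"]] += event["total"]
bus.subscribe("order_placed", on_order_for_reporting)

bus.publish("order_placed", {"id": 1, "day": "2022-03-21", "total": 42})
```

Neither consumer knows the other exists, which is the separation being described; the "haphazard" feeling usually comes from the event schema becoming the implicit coupling point instead.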


One upside of dealing with this mess (and understanding all the intricacies, pains, etc.) we're in is being able to laugh at the kafkaesque situations people find themselves in now, like your story or other blog posts.


This is blowing my mind. I felt like you were specifically talking about my place of work. Turns out (based on your github link in your profile), you did work there haha!

The 7 repos are gone now and a k8s cluster has replaced the swarm service. Deployments are a bit easier to manage now. It's still such a slow development process to add a simple API endpoint, especially if that API has to call other DAOs. It's crazy because I feel like it's completely normalized here that dev work takes 300% longer than it should for simple features.


Have you considered not sharing code between services?


Yes, I came to learn that microservices would probably greatly excel at sending JSON strings back and forth. Then I wondered why we weren't just all using Erlang


I don’t understand why I got the downvotes…

I was taught that sharing code between services should only be done in circumstances in which there is already a strong connection between them - for example by being owned by the same team.

In the organization where I initially learned how to deal with microservices, even direct calls via HTTP between services were a no-go, used only in rare circumstances, and even then only temporarily.

I am not sure if this is the way to do it, since I have a sample size of one - but we had about 500 services to manage, and in the end it worked okay and I never saw anything like what you described. That's why I wanted to know if you had considered just not sharing code.


Regarding why you got the downvotes, I have a guess: "Have you (maybe) considered not ..." usually gets followed by something that's transparently stupid or incompetent or otherwise undesirable in the given context. For example: "Have you considered NOT publicly yelling at your coworkers?" Obviously that's not what you were saying, but it may have read like it.

It's a fantastic INDIRECT way of telling someone they did something stupid. And because stupid actions map so well onto their doers, it's also a fantastic way to insult people.

As an aside: this sort of indirection - asking a question to assert something - it's not peculiar to English, but it's suuper common in English-language expressions (which is, in fact, due to the habitual indirection of the English - at least according to a linguistics professor I once had.)

The kicker is: because it's an ironic construction, it's likely going to require more effort to process: the presence of irony means that what is actually meant by the speaker is not encoded literally, but rather must be interpreted - derived from what WAS said - and irony (and other ways of flouting literal language) yields a disjunction (x || y || z). It puts the listener into a role where their recognition of intent and ability to joke around are tested - and the correct interpretation is up in the air. So while it's possible to say this expression to a friend when they screw up ("Have you considered NOT showing up to work drunk?"), it's also a great way to make an enemy ("Have you considered NOT being a fuckup that nobody likes?"). Same construction. (In fact, this latter example is especially sinister, because any answer you give just makes YOU assert what you're being slandered with. "No" - I haven't considered not being a fuckup? / "Yes" - I did in fact consider not being a fuckup? You get the idea. It can be wielded in a very mean way.)

And your comment wasn't intended this way at all, and in fact it's obvious when you read it that straight talk and no irony is meant, but it shares enough of the signification with the ironic phrase that it sets off alarm bells. Especially in an online reply to someone, where trolling is sport! The context is working against you here.

And the context is the first tool we have when interpreting communication, because it's already available to our cognition before we receive any given message. And brains are energy misers, using heuristics to filter shit out that doesn't look like a good reward for the necessary energy expenditure.

So I bet that the downvoters saw 1) a short comment, and short is by the way much more likely to be low effort and unconstructive; and 2) all the native speakers' downvoting brains instantly recognized the same construction as the ironic idiom talked about above, and then they instantly said fuck it, this person is being a dick, without even really reading it. Since irony (unless it's expected) requires more processing effort (both in detecting WHETHER irony is present AND what the ironic speaker means, just to get to the point where you can start assessing the semantic content), these two things were enough to justify jumping the interpretive gun. I wouldn't be surprised if your question was flagged by MOST native speakers' brains' asshole detection systems. And so, the actual substance of your question, which, processed in its entirety, would have indicated that you were asking a bona fide, relevant, even microservices-essential question - got booted out. And then the overzealous downvoters succumbed to the urge to smash the downvote button without a second thought and never looked back.

I found myself in the same place, actually - thinking "they're being a dick," then moving on. Probably took less than a second. Often when people in online conversations say why the downvotes? they're trolls acting in bad faith, which pisses me off because I always go reread what they wrote. So I did. You weren't calling anyone stupid! Still, I had to stare at it a bit before I realized it was an unintentional collision with the syntax of sarcasm. Maybe like if I wrote "Was hast du denn?" and really truly meant "what do you have?" instead of "what's wrong with you?" If you don't recognize it for what it is, it might look like an honest question which expects an honest answer.

Oh. Another thing to consider: after a while spent in any given semiotic context, our cognition adjusts and those cognitive heuristics that reject hard things become even lazier. Our pattern of behavior when reading on screens starts with reading, but then it turns to skimming and scanning if given long enough. I'd bet money that if your brief, but bona fide contribution had been one of the first comments, your downvote percentage would be a LOT lower. Halfway down a long page, the skippers are skipping and the downvoters downvoting. That doesn't mean that they intended to or are even aware of the switch in reading modes. At the top of the page, information is still shiny and new, and worth interpreting, speculating on the speaker and their intentions, etc. After a while, nobody gives any comment a second chance.

Anyway, I don't get to revisit all the shit I learned at university enough, so it felt good to write all that. Are you still here? Maybe everyone's already moved the fuck on. Brain got bored? Would serve me right, ha! So, here I go, back to my job, where instead of working in cognitive linguistics, text semiotics, pragmatics, philosophy of language, etc., I'm writing boring-ass JavaScript all day, for an application ... being built on microservices.


Such a nice and enlightening explanation. Thank you.


The reason why "we" are doing this is because in a lot of cases, the tech is no longer used as a means to solve a business problem - instead, the tech is the end-game itself and complexity is a desired feature.

The market has been distorted by endless amounts of VC funding for very dubious ideas that would never be profitable to begin with, so now you have two options: you can spend a few hundred grand building a boring solution with a slim team of engineers, realize it doesn't work and quit, or you can keep over-engineering indefinitely, keep raising more and more millions, enjoying the "startup founder" lifestyle while providing careers to unreasonable numbers of engineers with no end in sight, because you're too busy over-engineering rather than "solving" the business problem, so the realization that the business isn't viable never actually comes.

Which one do you pick? The market currently rewards the second option for all parties involved, so it's become the default choice. Worse, it’s been going on long enough that a lot of people in the industry consider this normal and don’t know any other way.

I've commented/ranted about this before, see https://news.ycombinator.com/item?id=30008257, https://news.ycombinator.com/item?id=24926060 and https://news.ycombinator.com/item?id=30272588.


I think this is a bit naive. Startup founders do not care about providing careers for engineers, and usually want to stop being a "startup founder" as soon as they possibly can - they are chasing huge exits and being a founder is incredibly stressful.

The real reason that microservices are so prevalent is that they became fashionable, for exactly the same reason a particular item or brand of clothing becomes fashionable - influential people were seen using them and so regular people aspired to start using them too. At a certain point in the popularity curve, _not_ using microservices becomes a controversial viewpoint.

They also suffer from being what I call "conceptual crack". There is a certain kind of idea that really tickles some engineers' brains. Microservices seem like such a clean solution, each service having its own single responsibility, so easy to draw on a whiteboard, so neat and tidy. Other ideas I place in this category are blockchain, redux, and V=f(s). Clean and tidy ideas that are compelling conceptually but result in nightmarish levels of hidden complexity when they're put into practice.


> Startup founders do not care about providing careers for engineers, and usually want to stop being a "startup founder" as soon as they possibly can - they are chasing huge exits and being a founder is incredibly stressful.

Imagine you were the founder of a business you've now realised... isn't going to be the next Google.

Your product hasn't failed! By some metrics it's very successful! You've got investors who value your company at $10 billion, and they're ready to loan you $1 billion.

But also, you've never made a profit, you've tried the most likely routes to profitability without success, similar businesses have suffered sudden implosions, and you'd have gone bankrupt years ago if it weren't for the fact there's chumps who'll work for you for free. You've got no plan for how to pay back that $1 billion.

But so long as you keep your mouth shut about that last paragraph, and accept the $1 billion? You get a high-status job, a thousand-person empire, and a fat compensation package. 1000 of your colleagues stay employed too.

Why would anyone climb off that gravy train?


For reasons the above commenter has already mentioned.

Most founders either want to run a profitable business or make their way to a big exit. Company valued at 10b? Great, sell or IPO. If the founder really realized that there was no profit ever to be made and genuinely didn't believe in the future of the company, then the best move in their own self-interest would be to sell and gtfo.

Not only is your premise impractical, it's inherently illogical for your antagonist to act as you imagine.


As someone who has been acquired prior to profitability a few times, there are subtler variations at play here. There may be avenues to profitability that the founders have no stomach for. Their sense of ego will let them do a lot but there are things they won't do. There is some honor among thieves and being willing to exploit people is very different from outright abuse. There are much worse people out there in the world to have as a boss. This sort is merely run of the mill.

The new owners have a different threshold, which you are about to learn. The exchange of money is fresh in their mind, they don't have a history with the employees, and they have a story in their head about how they can turn this money into a pile twice as high. They may well get the company to profitability, but if things were a little ridiculous before they may be a proper circus now.

There's some magic algorithm they use for figuring out options and vesting periods to retain staff, and there's some inflection point where if you stay this long then you get the most extra money per month. This is the trap they have set for you. It's practically an optical illusion, which you can only see once you've stepped past it. You will underestimate the relative value of month 3 versus month 13, and you will forget that 6 months may be worth $6N dollars but 5 months is worth $0. And so with one month to go you will be willing to put up with 4x as much bullshit, forgetting that most of that extra money was already buying the last 5 months. In effect, double spending your bonus. The moment you leave you will wonder why you didn't do it ages ago, even as you are spending the cash.

If it seems like you should stay for 2 years, then you will probably be happier with 1. If 3 years, then 2 years, maybe 18 months if there are increments smaller than a year. Of course, nobody can really tell you this, because your brain will keep asking you 'what if I had stayed' until it's happened to you at least once. But I think maybe you can tell people not to repeat that mistake. Trust your instincts from last time. This one won't be different.


False dichotomy. “No profits ever” and “unicorn IPO” are not the only two options and if the latter isn’t on the cards, “fat executive compensation and cocktail lunches while pretending you’re on the way to the latter” looks pretty good.


If the first commenter is naive, then this is even more so.

Most people love to be looked up to, loved, paid nicely. They won't quit.


> Why would anyone climb off that gravy train?

If I actually believed I couldn't deliver, the stress alone of feeling like a fraud and waiting for the hammer to fall would seem worse than the interim reward for me, but I can certainly see people built differently who would tolerate that. I think it's more likely that people let themselves be convinced it will work out during the process of convincing investors.


I'd say this may be an overly cynical view of founders even. I don't think they're usually that calculating; my suspicion is that most of them believe their own hype and really do think their business is going to be the next Google. I'd say Adam Neumann epitomizes this to me. He could be a calculating cynical genius, but he just seems like someone who in another time would have been either a rockstar or religious guru, but managed to hook up with a different crowd and everyone got caught up thinking an office sub-leasing company was some profound invention.


> Why would anyone climb off that gravy train?

Decency?


The decent would have been selected against at the paragraph-about-which-one-keeps-one's-mouth-shut stage, if not before.


A University near me has a Master of Business Administration (MBA) course that includes "Corporate Ethics".

Can you believe it? The audacity! A business charging business people, for a course of knowledge both sides know they will never use...


Ha ha ha


> You've got investors who value your company at $10 billion, and they're ready to loan you $1 billion. […] You've got no plan for how to pay back that $1 billion.

This suggests either:

- the investors [think they] know something you don’t know

- the investors are profoundly careless with their investment

- the investors don’t exist, this is completely implausible

- you’re actively misleading the investors about that paragraph, not just keeping your mouth shut


> - the investors are profoundly careless with their investment

Sounds plausible to me.

For example, consider WeWork - the CEO was literally buying office buildings with his shares, and having the company rent them off him. These investment funds are astonishingly careless.


I’m having a hard time finding information about this in the sea of other WeWork drama, so I can only speculate. This sounds like it easily could overlap with one or more of the other categories I listed.


I'm sorry... what?

This scenario you've described is such a stretch that in all likelihood it has never happened.


Startup founders don't care about providing careers per-se, but they care about the clout and connections that comes from founding a company with 100+ engineers mentioned at every cloud provider's conference as opposed to a scrappy garage with 3 greybeards hacking away Perl code running on rusty bare-metal.

CTOs and engineering managers care about their own visibility at said cloud provider's conference so it gives them bonus points for their next gig.

Individual contributors want to take the place of the aforementioned engineering manager/CTO so they'll double-down and learn the current stack and "best practices". They may not even be aware that there's any other way or how inefficient it is, as they've started their career during this madness.

If any of those people have stock options, keeping low and going with the flow is a sensible strategy if they don't want to lose them by getting pushed out for speaking against the groupthink.

There's rarely explicit malice involved at any specific stage - it's a market-level problem. The music will stop at some point though and we'll see a readjustment.


>they are chasing huge exits and being a founder is incredibly stressful.

Are they really? I think they're workaholics chasing an escape. True success is the worst outcome they could imagine, because then they'd have to tend to the rest of their life. Why be a serial founder, if it's so horrible and stressful? Why keep working yourself to death after your first $10M, $100M, or $1B? Why is it never enough?

People have self-destructive instincts that are disguised in socially acceptable (or worse, praised!) forms. We shouldn't ignore the actions of founders when trying to make sense of their words.


What is V=f(s) ?


View = Function(State), the original idea behind React. I use React every day, and I enjoy it, but it hides nightmarish complexity to preserve a semblance of this original, neat and tidy idea.
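The idea in miniature, sketched in Python rather than JSX: the view is a pure function of state, and every state change simply re-renders from scratch (React's real complexity lives in making that re-render cheap via diffing):

```python
def view(state: dict) -> str:
    """Pure render: same state in, same markup out, no hidden mutation."""
    items = "".join(f"<li>{t}</li>" for t in state["todos"])
    return f"<h1>{state['title']}</h1><ul>{items}</ul>"

state = {"title": "Todos", "todos": ["ship it"]}
html_v1 = view(state)

# State changes are modeled as producing a new state, then re-rendering.
state = {**state, "todos": state["todos"] + ["write tests"]}
html_v2 = view(state)
```

Everything messy in a real UI - event handlers, focus, animation, async data - has to be smuggled back into this pure picture somehow, which is where the hidden complexity accumulates.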


How do you mean that this causes 'nightmarish complexity'? I guess it's true if you measure the amount of complexity that powers the framework, but the visible APIs, the resulting project structure and the implementation itself are incredibly simple and versatile.

I'd say it's one of my favorite properties of component frameworks as opposed to raw element modification, templating solutions or classical MVC.


I'm not criticising the idea or the framework, but the react team have had to go to enormous lengths to make that one simple idea work. And the community have had to embrace (and then sometimes, forget) some pretty controversial techniques and concepts in that time, with more big changes coming soon (hello concurrent mode!). It has cost FB tens of millions of dollars to develop. I'm not saying that investment hasn't been worth it, but peek under the hood and the complexity is huge.


Alright, I agree completely. But I still don't see how it causes the same issues as microservices does. It is definitely conceptual crack (which is a genius term btw), but it isn't an ominous foot gun the same way the allure of microservices is.

EDIT: Oh I get it, you aren't talking at all about the consequences for the implementers. Only the massive effort required to turn the alluring V=f(S) into the React we have today. Got it.


They didn't say it causes the complexity, just that React hides a lot of complexity away. React is very simple and easy to use on its surface, but it's an incredibly complex collection of software under the hood.


It's funny, before I knew about React (this was ~2014), I wrote a ~7ksloc webapp with jquery and underscore that also just used memory for state and triggered a re-render of things when inputs changed. I haven't dug into how React works at a deep level lately, but at the time, for my relatively simple thing (basically did a few pivot tables worth of calculations for some financial stuff), it wasn't really that complex. I'm sure that enough engineers working on a framework for a decade can make it very complex; I just wonder, as I do with many mature pieces of software, whether a ground-up rewrite based on the same concept, but knowing what they all know now, would look drastically different from the latest dev build.


"Startup founders do not care about providing careers for engineers, and usually want to stop being a "startup founder" as soon as they possibly can - they are chasing huge exits and being a founder is incredibly stressful."

If that's true, I'd like someone to explain Craigslist.


Not a Startup, didn't follow Startup trajectory. They were building a tidy little business - except they moved past the little part.

When you start, you choose between investors+growth=exit or profitable+steady=tidy

Greatly simplified of course but Craigslist chose the second path.


"Not a Startup, didn't follow Startup trajectory."

Every company was once a startup.


Disagree. A Startup is a company designed to grow and scale fast (according to my investor friends)

A small business is one designed to be tidy.

All businesses start, but not every one that starts is a Startup.

Include tangent here about Lean Startup vs Startup.

Also, I'm using Startup which is different than startup - the Proper Casing I think implies the Venture-to-Exit part.

https://en.m.wikipedia.org/wiki/Startup_company


An outlier that predates the modern SV scene


meh, I think it's propaganda from major cloud providers. It's hard to do microservices well without all their tools, which is $$$


I believe this to be reasonable, though I believe there's an even stronger force at play: engineering retention. In other words, let engineers build whatever they want to build, within reason of course, because if they don't they may just go to another company that will let them. Solving for boredom. Complex systems are interesting.

I'm going to flip the narrative a little bit: whether it's "tech is the end-game" or "engineers are bored and just want to have fun", I'm not sure I see the problem. I've grown out of the "maximally efficient business is the endgame" propaganda. My endgame at work is to have fun and produce enough value so I can come back tomorrow and continue having fun. Business as the endgame is a great motive for the CEOs who make millions a year. I'm not that.

Put another way, many people say that coding is as much art as science or engineering. Artists may do commissions, but ultimately their work comes from within; it's not, by and large, prescriptive. The endgame is the art; the endgame is the tech; and, hopefully, that endgame is marketable. Sometimes it is, sometimes it isn't; that's not especially relevant to the process.

Or, you can take the stance that the endgame is the business, and spend your life unfulfilled, making your rich boss richer. I'd rather seek fulfillment from the tech; not the money.


Technology, viewed in the traditional sense, is a means to an end. In the context of farming, for example, the plough and its later improvements weren't the endgame--the increased yields were.

I often equate a business to a ship. It's dynamic, navigates obstacles, and is captained by leadership.

Perhaps the two examples are extremes on a spectrum. And inside is the option to learn more about the ship, so that the right problems can be resolved and solutions found, to help it reach its destination. And perhaps fulfillment can be found there.

In fact, in doing so, sometimes, new courses become realized. Personally, I find getting one charted very fulfilling.


This form of nihilistic cynicism is so dull because it's non-falsifiable.


Generally agree, but it's not cynicism, it's pessimism. Specifically about human nature - e.g. that people will encourage complexity simply to protect their jobs. I think, at worst, people will NOT make an extra effort to reduce complexity. A sin, but a small one in the scheme of things.


I don’t think any programmers are worried about job security right now. But I would say that having advanced and shiny tech is a very compelling feature of a job. I’m seeing generic WordPress/PHP agencies really struggle to retain developers, because no one wants to work on WordPress or PHP; it’s not a long-term career. So places which do the exact same websites in Rails and React are doing better, because even though WordPress might be faster and easier, you have to pay people more than you would if you have cool tech.


I feel like we're on the brink of nuclear war, and I'm thinking about asymptotic behavior of webapp stacks...and I'm okay with that!

I would guess that both PHP and Rails/React (and Java, C#, Python) folk will have all the work they can stomach, if they can stomach it. Consider how in-demand COBOL programmers are these days!


>Consider how in-demand COBOL programmers are these days!

I keep hearing this but I have only ever heard bad things from the people who work there. Considering how in demand all developers are right now I wouldn't even think of getting in to some obsolete tech. You'd have to pay me enough to retire quickly.


I don't believe it's malice per-se. The market sets the tone and everyone has to follow or be left behind. I've mentioned this in another comment: https://news.ycombinator.com/item?id=30760146


it's nihilistic if you find meaning in microservices, and cynical if you don't think there's a weird tech VC bubble..


I think it's pretty absurd that you could in/validate many startup hypotheses with only a few hundred grand. You can maybe pay for two engineers (plus overhead) working for a year for that price, but you'll also need someone to do your accounting, legal, product, etc work.


Falsifiability doesn't govern truth.

If I claim you felt angry when typing this, can I falsify it?


What do non-falsifiable statements like that lead to? Where does the discussion go from there?


You can make persuasive arguments about non-falsifiable points. Just because you can't form a proof or completely demolish an argument doesn't mean it's uninteresting. Much of the social sciences, politics, and the real world in general relies on useful but non-falsifiable arguments.

If you confine yourself to falsifiability, you will never be able to understand people.


> You can make persuasive arguments about non-falsifiable points.

You actually can't.

Like, you can make _arguments_ based on non-falsifiable points. But if the points aren't falsifiable, then there's nowhere for the discussion to go. How can I respond to them? What can I do to counter your arguments? Nothing. It's dull.


It is time-intensive but not particularly hard to collect data from job postings and the types of exits startups eventually make. I find it hard to get a good quick estimate, since it is hard not to think of key examples that fit my personal expectations.


I think there's another hidden element: the majority of people in tech love wrangling with complexity on computers.

Why else would people spend hours trying to configure and customize linux distros?

To them it's a puzzle game and they very much enjoy the challenge of solving these puzzles. Having a dynamic system with many moving parts that all need to be configured just the right way so they finally fit together and come to produce the desired outcome.

This is an "epiphany" I got from playing "The Witness". I spent more than 50 hours playing the game, and I'm not even 10% finished. The puzzles in the game are original and require high level complex thinking to solve. But at some point it just got frustrating to me. I wanted to play games to sort of relax or perform low key mental activities. But this game wants you to spend a lot of mental energy to solve puzzles that seem arbitrary and pointless. The feeling I had when I was playing this game was very similar to the feelings I had when I was trying to wrangle with confusing Docker configurations.

That's when it hit me: people who love to play with Docker configurations treat it like puzzle games and they enjoy every bit of the mental effort it takes to get things just right for the system to work. It doesn't bother them that the system is fragile or over complicated, or that the mess they're building is hard to maintain.

Of course, I kind of empathize with that because that's also what got me into computers and programming in the first place. But to me, the complexity I want to deal with is in the code. Once I write code that solves a problem, I don't want to then struggle to get the code running. I want to just compile and run with one command. I want all the complexity to be contained in the code and to keep the environment simple.

But if your job is DevOps, you don't get to solve hard problems in the product's code base. So instead you solve hard problems in the environment that the code executes in. So you thrive in the complexity of microservices and Docker and all that buzz.

In other words, people in tech love solving hard problems, and if you don't give them hard problems, they will invent them.


Very well put; something I have been observing for the past ~5 years.


> The reason why "we" are doing this is because in a lot of cases, the tech is no longer used as a means to solve a business problem - instead, the tech is the end-game itself and complexity is a desired feature.

I have also talked about something similar before [1].

A company I worked for, after receiving way more money than they needed from Softbank, suddenly had to hire as many people as they could as a condition imposed by the investor. This got to the point where we had large teams working solely on portions of a page of the application, even though the content (not only the look) of the website didn't really change for a few years. The constant rewrites and programming language changes (three full rewrites of the whole app) meant that there was an endless stream of work even for teams that only controlled half of a settings page.

Like a friend put it, "your 8-person team is a 3-day job in a normal company".

Even with the pandemic bringing customer numbers to almost zero, the website was still too frail to stay up. And the solution to that wasn't introspection about how the architecture was shit due to Conway's Law. It actually ended up in another rewrite using even more complexity and more division.

[1] https://news.ycombinator.com/item?id=30397934


Is this what it's like down in SV? You folks have enough money flying around willy-nilly to just build out untested architectures at a whim for unestablished businesses?

In the companies I've worked for, architectural discussions are taken extremely seriously: we research and discuss how different approaches to solving problems have turned out for other folks, and we might even run a pilot or two where we do a small-scale demo to feel out the pluses and minuses.

Complexity without sufficient justification can't survive in most established businesses. A research budget certainly can, and you can use that research budget to explore options - but if you want to rewrite the codebase into a different paradigm, you need to clearly demonstrate what we'll be getting out of it. That might be performance, maintainability, hireability (i.e. most local new grads are trained up in this paradigm), feature support or something else, but you need to show the benefit. I think it's a good idea to be skeptical of historical decisions - a lot of decisions early in a company's history are made arbitrarily and sort of assumed because the momentum is on their side - and you definitely should fight against that momentum and reject bad conventions. But switching to a hip new tech just for complexity's sake? I have no idea how I'd ever sell that to higher-ups as a project initiative, and I've sold and executed two framework switches (no framework, to a now-defunct framework, to a now-healthy framework) and several major data model changes (one that took about ten man-years of labour).

When you've got a green field you're making decisions left and right, and some of those will be made arbitrarily - but the big commitments should be well contemplated in advance.


Yes, most recently, any barnacle that can be stapled onto the monstrosity that is the Modern Data Stack. Before then, for Kubernetes. Before then, for the Hadoop ecosystem. (These all have value -- even we work here -- but the ecosystems can't possibly support all the companies in each.)

For whatever the thing of the year is, a VC team will fund, say, 20 startups with the hope that 1-2 do well, and is OK with the rest flopping. It's expected that 18-19 were bad ideas, but they all still get the giant sales/marketing/engineering staffs that 99% of companies won't, so that the VC can give each a go and then find out which is the winner. That is why you especially shouldn't trust speakers, marketers, and sales staff from a VC-backed company: the financials say there's a 95% chance it's not as advertised and will disappear on you in some financial shell game.


>but if you want to rewrite the codebase into a different paradigm you need to clearly demonstrate what we'll be getting out of it.

Well, that all seems reasonable until that upstart competitor comes along and upends the market with simpler stuff that doesn’t require so much bondage and discipline and tithe to process religion suppliers, and the customers start to question the value you’re supplying and wham, all that orderly process and framing is like out the window, man, like man the lifeboats. It’s a pain to stay in the game when it seems like change for changes’ sake and there’s no time to sit down and reason through a business case before the next new shiny hits. Stupid but true.

I still remember Digital trying to sell me maintenance contracts for 20 year old PDP-11s still in service out in the field. We rebuilt our entire infrastructure from scratch on new software and hardware in less than a year for 2/3rds the cost of the annual maintenance contract...the next year those pretty wood paneled Digital offices with executive dining rooms had "for lease" signs out front. Digital had failed the paradigm shift.


This take seems to be predicated on the idea that VCs are out there funding teams just because of the technical complexity of their codebase? This just doesn't seem like the reality to me.

If it can be done better for cheaper, VCs will tend to fund that team.


I guess the question is then, would VCs give money to startups who say "We're using JQuery and PHP" or to startups who say something like "We're leveraging AI and machine learning to deliver product via a MERN stack on top of AWS, giving us web-scale architecture"


VCs care that an early stage team will be able to execute, and to a lesser extent be able to innovate. Telling them that you use a 20-year-old tech stack that few junior (aka cheap) engineers know or want to know doesn't inspire this confidence.

You would get a similar read from VCs if you said that you use Haskell, Erlang, or up until quite recently - Rust.

There is something pretty comforting to an early stage investor about "We use Java microservices on a mainstream cloud provider". Says that the team isn't ancient, isn't avant-garde, and that they will be able to hire people. Worst case, an acquirer won't mind buying the leftovers or acqui-hiring the team.


I think this is a really warped view of VC funding, who sorta kinda in a passing sense care about the tech stack, but where the focus is almost entirely on the business side. I was at a company that bought a tiny little (originally) VC-funded shop that was actually maybe 1K lines of PHP total. The secret sauce was they found a desperately underserved market that just needed something to collect and organize data between unconnected parties that took hours to do manually. Damn thing sold itself because it cost nothing to run, had essentially no maintenance costs, and made their users about 10k/user/mo from the hours it bought them back.


Businesses like the ones you speak of are gems in a sea of crap. They exist, and in that case tech stack definitely isn't important.

But for the rest of the "crap", AI- & blockchain-powered crap is more shiny than other, more boring crap, at which point the tech stack might be more important.


>"We're using JQuery and PHP"

If their MRR/ARR/MAU/DAU or whatever are strong then absolute-f'n-lutely.


Do VCs care or even know about the tech stack a startup uses/intends to use?

AI and ML is one thing - it can mean a product that couldn't exist without it. There's nothing about MERN vs PHP that indicates that though.


I remember quite some years (somewhere between a decade and a decade and a half) ago having to do some truly horrific things to make things work on AWS that would've been trivial and far cheaper at the time if we'd just bought a decent server and stuck it in a colo somewhere - specifically because my customer's investors were all-in on AWS as the only possible way to do things.

I'd note that today putting that workload on AWS would probably be a pretty pleasant experience - but yeah, sometimes investors absolutely -do- care in ways that you'd really rather they didn't and you just have to suck it up.


> If it can be done better for cheaper, VCs will tend to fund that team.

That might be true if there was a shortage of VC money so they had to choose their investments very carefully. The reality is there's a huge amount of money sloshing around with few good investment opportunities.


Before we invented terms like "microservices" and "service meshes" and even SaaS and PaaS, the phenomenon was just called, "Dazzling them with bullshit."

Tony Hoare used his Turing Award speech to state the following:

    There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.
Microservices were meant to be simple encapsulation but if you create enough crosslinks in a network of very simple services then rather than removing the weight you've just pushed it out of the nodes and into the edges. And since those edges don't exist when the system is at rest, they are dead awful to analyze and reason about. There are no obvious deficiencies.


If it sounds easy VCs will not fund it, even if it's actually hard.

If it seems hard and has buzzwords then VCs will fund it; they use difficulty and buzzwordiness as proxies for being on the bleeding edge of technology because they think that's how companies succeed.

The only reason blockchain and ML for example are trending is because VCs will pump money into companies that use these buzzwords.

One of my previous employers really wanted to use AI for a mundane problem that the engineering side provided simple solutions for, but the "founder" kept trying to push for AI from any angle he could imagine. He didn't tell us this directly, but to me it was obvious that for him AI was a gimmick to get VCs to pump money into his company.


"He didn't tell us this directly, but to me it was obvious that for him AI was a gimmick to get VCs to pump money into his company."

This is exactly the kind of thing you should tell the engineers. Get them on board, straight up.


Do you suppose it's a sort of impostor syndrome? They're afraid they aren't smart enough to understand what they're paying for and so they just keep pushing the doubts down and writing the checks? Or is it full Dunning Kruger?


It's what Peter Thiel calls indefinite optimism. They have no idea what's going on, but they believe there's money to be made in tech. They follow tech news to try and figure out what types of tech companies make for good investments.

It's a mimetic society.

https://www.luca-dellanna.com/mimetic-societies/


Do they pick up cheaper solutions, or the faster and bigger aiming with more marketable pitch?


More marketability definitely helps with passing it on to the next sucker, so it's a valuable feature that shouldn't be overlooked. The next VC is likely to be more interested in the first AI- & blockchain-powered application running on AWS InfiniDash than boring Rails on Heroku & Postgres.

Cheaper solutions would be a good pick for something that has a very good chance of success and thus doesn't need a "next sucker" nor media attention, but those are very rare as typically it would be bootstrapped and not involve VC to begin with.


My mental model: The VC model is about making a spread of bets where at least one of them will produce a huge return, so the average investment will be in a company that's irrationally overambitious.


It's not that VCs want complexity per se.

VCs and CEOs, a lot of times, want more developers working at companies, because they believe that's how they'll get the velocity to experiment, the ability to grow, and the ability to change course quickly enough. Up to some point that's true! The issue arises when having lots of developers requires having too many isolated teams. That's when Conway's Law kicks in: a company with lots of isolated teams will probably have complex architectures to make the teams work, most probably using (micro-)services. Hence, complexity.


To this day I can't quite fathom how 300 developers work on the same thing. Just about the time I accept that I am alone in this opinion, someone like Discord shows up proving the sorts of crazy force multipliers you can get from The Right Tool for the Right Job.


> To this day I can't quite fathom how 300 developers work on the same thing.

Yep... As someone who's been in this situation a few times: after a certain number, very badly. Things get done, but at a slower and messier pace than at a smaller company.

The law of diminishing returns kicks in together with the sunk-cost fallacy. With 10 people you're slow, with 50 you're fast, but from there on you're forever "chasing that high" of velocity increasing with headcount.

There's no way to escape what Brooks wrote in Mythical Man Month.

Unless there is a very clear-cut division of labour, like 20 teams working in 20 different libraries that don't talk to each other, there is no free lunch when it comes to team communication.
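The arithmetic behind Brooks' point is worth spelling out: pairwise communication paths grow quadratically with headcount, which is why "more people" stops buying "more velocity". A quick back-of-the-envelope sketch:

```python
def comm_paths(n):
    # n people can form n * (n - 1) / 2 distinct pairwise communication channels
    return n * (n - 1) // 2

for team in (10, 50, 300):
    # 10 people -> 45 paths; 50 -> 1,225; 300 -> 44,850
    print(f"{team} people: {comm_paths(team)} communication paths")
```

Clear-cut divisions of labour work precisely because they cut most of those paths.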


Even with four teams, I've found a lot of instances of people building the same thing. That wouldn't necessarily be bad, except often those 'things' fall into a space of errors that almost everyone makes until they learn/are taught to stop doing it.

So you notice that there are 2.5 implementations of a bit of logic because there are bugs, maybe security bugs, that cross 2 of them or are different on each.

I have found that once the code exceeds the size of one brain, it accelerates, because if I don't know code exists, I will write it again. Like driving down a mountainside with the brakes on the whole way - eventually the sanity leaves the system and you go careening down the mountain.


> providing careers to unreasonable amounts of engineers with no end in sight because you're too busy over-engineering

This is so spot on. The amount of unnecessary complexity is becoming ridiculous.


I'm building out two Step Functions flows, with numerous Lambda functions each, to process a NACHA file (direct deposit) - something that has been done for decades in a single console app. The CTO had already made up their mind that this was how it was going to be before we even started.

It's kind of embarrassing but whatever, we are going to bill a few million to get it done. And that's just my small business logic team. It doesn't count the two or three user interfaces.
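For anyone wondering why a single console app ever sufficed: a NACHA file is plain fixed-width text, 94 characters per line, with the first character naming the record type. A minimal sketch (field positions from my reading of the format - double-check against the actual NACHA spec before relying on this):

```python
RECORD_TYPES = {
    "1": "file header", "5": "batch header", "6": "entry detail",
    "7": "addenda", "8": "batch control", "9": "file control",
}

def summarize(lines):
    """Count entry-detail records and tally their amounts (cents, cols 30-39)."""
    entries = 0
    total_cents = 0
    for line in lines:
        if len(line) < 94:
            continue  # real code would reject malformed records, not skip them
        if line[0] == "6":  # entry detail record
            entries += 1
            total_cents += int(line[29:39])  # amount field, 1-indexed cols 30-39
    return entries, total_cents
```

The rest of the decades-old console app is mostly this: walk the lines, dispatch on `RECORD_TYPES`, cross-check the control-record totals.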


I don’t actually think this is where it comes from, necessarily. In my experience, most of this comes from the engineers themselves. Usually business just wants a CRUD app but the engineers want a distributed cloud microservice extravaganza, partly to keep themselves entertained and partly to pad their resumes.

But you don’t get upvotes for saying this on HN.


After having reviewed 300+ tech stacks, I can say with certainty there are cases for AND against microservices. The benefits are numerous; they just tend to be overused and not implemented the appropriate way.


That was an interesting insight that I haven't read before, thank you for sharing that.


I hate to tell you, but new technologies like React and microservices are very useful. Top tech companies regularly design whole systems using microservices. It's the legacy companies with legacy tech stacks that fail at it.


I’ve never said that those technologies or processes have no use at all.

However, in a lot of cases they could be considered premature optimisation.

A bulldozer is also a useful tool, but in the real world nobody uses one for small jobs better suited to a shovel because of how expensive it is.

In the tech industry however, VCs will be happy to bankroll the bulldozer for you so that it becomes cheaper than using the shovel. All else being equal, I too will pick the bulldozer at least as long as I’m not the one doing the maintenance it will inevitably require.

Worse, give it enough time and the skill to use shovels will disappear, and now everyone will be using bulldozers for even the smallest jobs, with all the negative externalities attached to them. The only winners are the bulldozer manufacturers.


New? Are you sure?

React is about a decade old.

Microservices are between two and seven decades old, depending on your definition.

None of this stuff is new.


This is a good point. But actually, I would argue that people against these technologies still claim they are "new".


is it even relevant? usually people who are against microservices or React are against them not because they are "new" but because they are usually the incorrectly chosen tools for the job.


React is a screeching infant that has jaundice and a few other yet undiagnosed terminal illnesses.


Before microservices were a thing, I had the chance to work on a couple of telecom systems written in Erlang/OTP, but it wasn't until years later that I realized we were already doing most of the things people were using microservices for, with the single exception of being polyglot (although Elixir and Gleam are starting to challenge that).

Small teams were dealing with specific functionality, and they were largely autonomous as long as we agreed upon the API, which was all done via Erlang's very elegant message passing system. Scalability was automatic, part of the runtime. We had system-wide visibility and anyone could test anything, even on their own computers. We didn't have to practice defensive programming thanks to OTP, and any systemic failure was easier to detect and fix. Updates could be applied hot, while the system was running - one of the nicest features of the BEAM, and one that microservices try to address.

All the complexity associated with microservices, or even Kubernetes and service meshes, are ultimately a way to achieve some sort of "polyglot BEAM". But I question if it's really worth it for all use cases. A lot of the "old" technology has kept evolving nicely, and I'd be perfectly fine using it to achieve the required business outcomes.


Winner.


I found microservices had the benefit of increasing release cadence and decreasing merge conflicts.

Are there complications? Sure. Are they manageable? Relatively easily with correct tooling. Do microservices (with container management) allow you better use of your expensive cloud resources? That was our experience, and a primary motivator.

I also feel they increase developer autonomy, which is very valuable IMO.


Decreasing merge conflicts sounds more like muting and/or deferring problems.

Microservice fanaticism seems to be coupled with this sclerotic view that the world can only exist as microservices or as a monolith.

From what I've seen in the last 20+ years, if I had to pick one sentence to describe a fit-all enterprise setup (and it's as stupid as saying "X is the best" without context), it'd be: a monorepo with a dozen or two services, shared libraries, typed so refactoring and changes are reliable and fast, single-versioned, deployed at once, using a single database in most cases - one setup like this per team of up to ~12 devs. Multiple teams like this, with coordinated backward compatibility on the interfaces where they interact.


To clarify, are you saying up to two-dozen services for a development team with ~12 developers on it?


Above all I'm saying that sentences like "microservices are better", "monoliths are better", "42 services are the best" are all stupid without context.

What your business does; how many people you have (3 or 10k); what kinds of roles and seniority you have; how long you're into the project (3 months or 10 years); how crystallized the architecture is; at what scale you operate; what the performance landscape looks like; what kind of pre-deployment quality-assurance policies are dictated by the business; whether offline upgrades are allowed or you're operating 24/7; which direction the system is evolving in; where the gaps are (scalability, quality...) - all of this is necessary to determine the correct answer.

Building website for local tennis club will require different approaches than developing high frequency trading exchange and both will be different from approaches for system to show 1bn people one advert or the other.

Seeing the world as hotdog and not-hotdog (microservices vs monoliths) makes for infantile conversations. There is nothing inherently wrong with microservices, monoliths or any of the approaches to manage complexity, e.g.:

- refactoring code into shared functions

- encapsulating into classes or typed objects

- encapsulating into modules

- simply arranging code into better directory structures: flattening, naming things better, changing cross-sections (e.g. by behavior instead of physical-ish classes and objects)

- extracting code to packages/libraries inside a monorepo or its own repository (e.g. open-sourcing non-business-specific, generic projects), or relying on a 3rd-party package/library

- extracting into dedicated threads, processes, actors/supervisors etc.

- extracting to a service in a monorepo or a dedicated repository, creating an internal team to black-box it and communicate via API specs, or using a 3rd-party service

...bonus points for:

- removing code, deleting services, removing nonsensical layers of complexity, simplifying, unifying etc.

etc


I don't know what the op intended, but services can be deployed in-process inside one monolith.


What is "correct tooling"? I haven't found anything that's remotely close to providing a nice stack trace, for example. How "micro" are your services? Can it manage to stand up a local dev environment? How do you deal with service interface versioning? Is this great tooling vendor tied?


On the stack trace, I think this is what the modern "observability" stuff is all about; traces, wide events, etc. One event per request. DataDog will say they do this, Honeycomb will say they do it and that DataDog is kinda lying, now there's OpenTelemetry, it's a deep rabbit hole.

It's easy to say this is a lot of work to reinvent something you get for free with a (single language) monolith, but at least it's recognized as a problem worth solving.


The stack trace bit is hard. For local development, ideally there's some fancy service discovery where your local service can hit a development instance of another service if you don't have a version running locally.


Given sufficiently carefully designed logging (with a request id that starts at the outermost service and gets propagated all the way through) you should be able to see the equivalent in the logs from a development set of services when something goes wrong. Pulling a full request's logs out of production logging is a bit trickier.
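The request-id propagation described here can be sketched with Python's stdlib logging; all names (`X-Request-ID`, `request_id_var`, `handle_request`) are illustrative assumptions, not any particular stack:

```python
import logging
import uuid
from contextvars import ContextVar

# Hypothetical context variable holding the current request's id.
request_id_var: ContextVar[str] = ContextVar("request_id", default="-")

class RequestIdFilter(logging.Filter):
    """Stamp every log record with the propagated request id."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.request_id = request_id_var.get()
        return True

logger = logging.getLogger("svc")
logger.setLevel(logging.INFO)
logger.addFilter(RequestIdFilter())
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(request_id)s %(levelname)s %(message)s"))
logger.addHandler(handler)

def handle_request(headers: dict) -> str:
    # Reuse the id minted by the outermost service if present,
    # otherwise create one; forward it on any downstream calls.
    rid = headers.get("X-Request-ID") or uuid.uuid4().hex
    request_id_var.set(rid)
    logger.info("handling request")
    return rid
```

Grepping production logs for one id then pulls out the full request's trail across services.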

For me, it boils down to "this is absolutely doable but I'd still rather have as few services as possible while still maintaining useful levels of separation" - at least for the primary business services. Having a bunch of microservice-like things serving pure infrastructure roles can be much cooler depending on your situation.


I agree that such a system could be designed and built. I just haven't seen any tooling that provides it, the next thing to free, like modern languages do. As far as I can tell, you have to have an experienced developer craft the system such that these features work. They don't come out of the box with any tool set I've seen, and I'm still looking and asking.


Lightstep is pretty cool and will show you something like a stack trace across systems, with timings, but it's not for free (monetarily, or dev cost to integrate it into your stack)


Yes, I see. I tried to understand their pricing. 10,000 active time series? Does that mean it will store and let me view 10k top level API calls for their fee? 10k separate services? I don't quite understand how this maps to actual use.

$100/service. Is that per top-level service? To cover 100 endpoints at one service each is going to be $10k/mo?


I believe the 10k time series is how many complete end to end traces (across multiple services) it will store at once. Lightstep has settings where you set sample rate, retention, which traces you want to collect etc.

I believe $100/service is top level service, not endpoint. But really not sure.


I agree. I think organizational scalability is an important benefit of microservices that doesn't always come up in these discussions. Having smaller, more focused services (and repositories) allows your organization to scale up to dozens or hundreds of developers in a way that just wouldn't be practical with a monolithic application, at least in my experience (I'm sure there are exceptions).


There are techniques to allow large teams to work on monoliths together. They take planning and discipline, but overall I would say are far more reliable than microservice explosions for similar sized systems, because the earlier you manage the integration the less work it is. ie what you pay at source integration time is less than what you pay dealing with deployments, infrastructure and especially support across distributed systems which can get real expensive real fast.

I've worked on multiple systems with around 50 developers contributing fulltime to them, very practically.


If a merge conflict occurs, I'd much rather hear about it from git instead as opposed to some obscure breakage later in testing or production.


The merge conflicts tend to be things like library updates or large scale refactors. It's massively easier to update a core framework 20 times than it is to do it once on a repo 20x larger due to merge conflicts and the minimum possible work being 20x larger.


If you're going to complain about something, you need to present some data that backs up your point; this is just a bunch of rambling opinions.

A lot of software engineering is about managing modularization, I've lived through structured programming, OOx, various distributed object schemes and now this. Basically all these mechanisms attempt to solve the problem of modularization and reuse. So, the fact that new solutions to the old problems appear just means that it's a hard problem worth working on. I'd say use every single technique when it's appropriate.


From the article: """

If we think about the fastest way to execute some code, that is a function call, not a web request.

If we think about the best way to make sure we detect problems at compile time, that is by using a library at a compiled language.

If we think about the best way to understand why something failed, that is a full stack trace.

If we think about resilient systems, the most resilient systems are the ones with the least number of moving parts. The same is true for the fastest systems.

If we think about deployment and management and upgrades, the simplest systems to deploy and maintain also have the least number of moving parts. """

You don't really need any data to back this up, it's all self-evident.


I'm not for or against microservices, but that seems like quite the cherry-picked list. There are arguments on the other side too, such as...

If we think about the easiest way to swap out pieces of functionality, that is by using well-defined interfaces, low coupling, and separation of concerns.

If we think about the easiest way to scale parts of the whole independently, that is by similar mechanisms.

If we think about the easiest way to build in fault-tolerance, that is by distributing work across failure points.

And so on...


But well-defined interfaces are not specific to microservices (ms). Low coupling, high cohesion, separation of concerns, and well-defined interfaces are simply system architecture, which is easier to work with without the infrastructure overhead of loads of semi-independent ms. Ms is a net loss here because your complexity budget is consumed by the infrastructure instead of the system architecture.

Fault tolerance again is largely orthogonal to ms. It's a matter of system architecture; see the point above. E.g. in a common situation where service one depends on availability from service two, having them as separate ms doesn't help. What helps is architectural design to keep service two useful somehow regardless, and this is orthogonal to whether the services are in process or done as ms.

Scaling is a whole other discussion, and microservices can have good impact here, but so can other techniques.


> If we think about the easiest way to swap out pieces of functionality, that is by using well-defined interfaces, low coupling, and separation of concerns.

Microservices don't enforce good practices and monoliths don't prevent them either.


Right, but some of those desirable system properties are much easier and lower friction in microservices.

And others in monoliths.

Each has its pros and cons.


> While microservices talk likes to pretend the solution is some horrific “monolith”, we never really had “monoliths” before in development that I experienced. What we had were some kinds of tiered architectures.

I've worked with with monoliths. The author must not have experienced them. I've worked places that had builds that took hours to run. We had git merges that took days. We had commit histories that were unreadable.

The developer experience working with it was one of CONSTANT frustration. The system was too big to make large changes safely. Incremental changes were too incremental and costly.

Note nowhere in here am I saying that microservice architecture should always be preferred. But the idea that its all just some sort of trend with no real underlying advantage is sort of silly.

Every company I've ever been at with a monolith tends to have "untouchables", keepers of the architecture and the original design, who understand the system orders of magnitude better than anyone else. That doesn't scale, and really messes with an engineering organization.

There's Conway's law, where software will eventually reflect the organization structure of the company, but there's also a sort of reverse Conway's law: when you have teams dedicated to specific services, you get to target investments in those teams when their services are not executing well enough.


Yeah I agree, it all breaks down when you need to make large scale changes. Something like updating a core library becomes virtually impossible because there is no half step. You have to fix _everything_ before you can merge and that takes long enough that the merge becomes horrific. I was told that updating Rails at GitHub was a multi year project involving building a compatibility layer so the app could run on both versions at once.


If you're going to embrace microservices, you need to be VERY confident that they solve real problems that you currently have and that they will result in an improvement to your engineering velocity within a reasonable time-frame.

The additional complexity - in terms of code and operations - is significant. You need to be very confident that it's going to pay for itself.


I have been around for a while too and I think I can answer the rhetorical question: it's a great fulcrum upon which to build teams and spring careers, and by the time problems have calcified there's been enough turnover or promotions that the reason why they are in place is completely lost. I do not say this with bitterness accumulated while building them: on the contrary, it's something I've usually realised only much later, when it was too late (and more than once).


Incompetent teams and engineering organizations will find a way to mess up both monoliths and microservices. Great ones will pick what works best for their specific use case and be effective at it.

The only correct answer is to not waste time with the decade+ worth of pointless internet debates on the topic.


There's a degree to which I agree with this, but the advantage monoliths have is the "opinionated" frameworks (chiefly Rails, Django and the like) that hand-hold a less competent team towards a sane design.

In comparison, building a good set of microservices is a minefield of infinite possibilities, with each decision about where a particular responsibility or piece of data should live being quite significant and often quite painful to change your mind about.


> If we think about the fastest way to execute some code, that is a function call, not a web request.

No, the fastest way to execute some code is a goto. Be careful with arguments from performance, that's how you get garbage like a former colleague's monstrous 10k SLOC C(++) function (compiled as C++, but it was really C with C++'s IO routines). Complete with a while(1) loop that wrapped almost the entire function body. When you need speed, design for speed, but you almost always need clarity first. Optimizations can follow.

> If we think about resilient systems, the most resilient systems are the ones with the least number of moving parts. The same is true for the fastest systems.

I suggest care with this argument as well. This would, naively interpreted, suggest that the most resilient system has 1 moving part (0 if we allow for not creating a system altogether). First, this is one of those things that doesn't have a clean monotonically increasing/decreasing curve to it. Adding a moving part doesn't automatically make it less resilient, and removing one doesn't automatically make it more resilient. There is a balance to be struck somewhere between 1 (probably a useless system, like leftpad) and millions. Second, there's a factor not discussed: It's the interaction points, not the number of moving parts themselves, that provides a stronger impact on resilience.

If you have 500 "moving parts" that are linearly connected (A->B->C->D->...), sure it's complicated but it's "straightforward". If something breaks you can trace through it and see which step received the wrong thing, and work backwards to see which prior step was the cause. If you have 500 moving parts that are all connected to each other then you have 500(500-1)/2 interactions that could be causing problems. That's the way to destroy resilience, not the number of moving parts but the complex interaction between them.
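The pairwise-interaction arithmetic is easy to check with a trivial sketch:

```python
def interactions(n: int, linear: bool) -> int:
    """Interaction points among n parts: a linear chain has n-1 links,
    a fully connected mesh has n(n-1)/2 possible interactions."""
    return n - 1 if linear else n * (n - 1) // 2

# 500 linearly chained parts: 499 links you can trace step by step.
# 500 fully connected parts: 124,750 interactions that could be the cause.
```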


Microservices work well if your contracts are well-defined, domain knowledge is limited, your team is under the size of a pizza, and your platform needs are diverse. Eg: Some small teams prefer containers, other prefer managed containers (serverless), others prefer small VM's.

SOA works well if your teams are larger and have larger domain (or end to end, however you want to call it) knowledge.

Monoliths work well when the domain of the application is singular, the team is large, or if you're in prototyping. The big downside for monoliths is that their scaling model must be considered in advance or engineers can tactically corner themselves with architecture. That incurs big, expensive rewrites as well as time.

While Conway's Law may be reflective of the enterprise's use (or overuse) of microservices, I think it really has more to do with a different enterprise habit: understaffing and budget constraints. Microservices and client-side applications, from my perspective, very rarely have long-term maintainers. Instead, things get done in cycles, and then for most of the year a given service does not receive anything besides some maintenance updates or low-hanging fixes. That makes a microservice look expendable and easier to replace to the people who manage resources, staffing, and budgets. Thus, things now look "modular" to the people who fund the ship that everyone else drives.


This is the comment I was looking for. “Microservices” as fashion is often a bad idea. As with any architecture decision you can’t just choose something that other people have called a “best practice” and pretend you did your due diligence, unless you understand why it is a best practice and know how it applies to your particular situation.


I see a lot of people acting like microservices are some conspiracy theory pushed on us engineers. I’ve never worked anywhere that pushed microservices, the places I’ve used them they tended to be additional functionality we could easily decouple from the standard backend. Even if they were I like the idea of microservices, having everything as abstracted away from each other as possible. Also would probably make code easier to onboard, just get a junior up to speed on one service at a time.


As I build out my infrastructure for Adama (my real-time SaaS for state machines), I'm leaning hard into a monolithic design. The key reason is to minimize cost and maximize performance.

For instance, comparing Adama to the services needed to build similar experiences on AWS has interesting results. Adama costs 97% less than AWS ( https://www.adama-platform.com/2022/03/18/progress-on-new-da... ), and a key reason is that the microservice approach is amenable to metering every interaction, which scales linearly with demand, whilst a monolithic approach condenses compute and memory.


97% less cost!

That means that the AWS service-based option costs 33 times more. Not ten times more, but thirty-plus times more.


I've been at a place where a single person is juggling twenty microservices to power a product with barely any users. Just the infra cost alone makes it insane.

"But one day, when we get massive growth, it will all be worth it", he says.

Alas that day may not come, since he is busy configuring load balancers and message queues instead of developing features.


I really like the way Uncle Bob described Microservices in this article: https://blog.cleancoder.com/uncle-bob/2014/10/01/CleanMicros...

He made the point that micro-services are a deployment method not an architecture. A good clean architecture shouldn't care how it's deployed. If you need to move from plugins to micro-services to be massively scalable your architecture shouldn't care. If you need to move from micro-services to plugins to make your app simple to host and debug, your architecture should also not care.

This strategy has been implemented in frameworks like abp.io very successfully. You can start your application as a single collection of split assemblies deployed as a single application and move to deploying as micro-services when it's necessary.
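The "deployment method, not architecture" point can be sketched as a deployment-agnostic interface; `BillingService` and friends are hypothetical names for illustration, not abp.io's actual API:

```python
from typing import Protocol

class BillingService(Protocol):
    """The core code depends only on this interface."""
    def charge(self, user_id: str, cents: int) -> bool: ...

class InProcessBilling:
    """Plugin-style deployment: just a function call."""
    def charge(self, user_id: str, cents: int) -> bool:
        return cents > 0  # stand-in for real billing logic

class RemoteBilling:
    """Microservice deployment: same interface, HTTP underneath."""
    def __init__(self, base_url: str):
        self.base_url = base_url

    def charge(self, user_id: str, cents: int) -> bool:
        # A real implementation would POST to self.base_url here.
        raise NotImplementedError

def checkout(billing: BillingService, user_id: str) -> bool:
    # Business logic never learns whether billing is local or remote,
    # so moving between plugin and microservice is a wiring change.
    return billing.charge(user_id, 1999)
```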


Also linked from that article [0] [1]

[0] https://www.linkedin.com/pulse/20140604121818-6461201-seven-... (I think that's it, original link doesn't work anymore, this looks like a copy of it)

[1] http://highscalability.com/blog/2014/4/8/microservices-not-a...


In my experience, 95% of the people advocating microservices can't properly explain ACID. Not to mention advanced concurrency concepts like MVCC.

Regarding division of work, those advocates seem to have forgotten that libraries exist.

Or that you can deploy a monolith and still scale endpoints independently.

Microservices might be a good fit for a tiny fraction of real-world scenarios.
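Scaling a monolith's endpoints independently can be sketched by shipping the same binary everywhere and letting an environment variable (here a hypothetical `SERVE_ROUTES`) pick which routes each instance pool serves:

```python
import os

# All routes live in one codebase; handler names are illustrative.
ALL_ROUTES = {
    "/orders": "handle_orders",
    "/search": "handle_search",
    "/reports": "handle_reports",
}

def enabled_routes() -> dict:
    """Same monolith everywhere; SERVE_ROUTES selects this pool's subset.

    e.g. a search pool runs with SERVE_ROUTES=/search and can be scaled
    to 20 replicas while the reports pool stays at 2.
    """
    raw = os.environ.get("SERVE_ROUTES", "")
    wanted = {r.strip() for r in raw.split(",") if r.strip()}
    if not wanted:  # default: serve everything (classic monolith)
        return dict(ALL_ROUTES)
    return {r: h for r, h in ALL_ROUTES.items() if r in wanted}
```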


Whether you build microservices or just services, distributed systems are undeniably here to stay. In today’s world there are very few products that can run on a single machine, whether it is for latency or availability or redundancy.

That said, the challenges of building such systems are real, and the developer experience is universally quite awful compared to our monolithic, single-server past.

It’s for that reason that I’ve been building [1] for the past four years. Would love your feedback if the OP resonates with you.

[1] https://encore.dev


This article fails to mention anything about team size which should be the first criteria for any decision about "microservices" (in quotes because it's just new terminology someone made up because service-oriented architecture wasn't cool anymore and they had to prove how original and modern their thinking was). Half of the engineering orgs chasing this fad are <50 people and have absolutely no reason to be adding the overhead of an SOA. The rules of thumb should be if you have more than 1 service per 4 engineers it's too many; and if your total engineering org size is not big enough to support a dedicated infra team of >4 engineers working full-time on tools just to support the other teams then you're not big enough.

Over-complicating things to pad your resume might get you into FAANG but it won't make you a good engineer or a good entrepreneur. The sign of a truly senior engineer is one who knows how to keep things as simple as possible to solve real problems while maximizing power to weight ratio of their code. The resume-driven-development anti-pattern is pretending that the problems facing 1000 or 10000 person orgs are your problems. Those large companies got to where they are by solving the problems in front of them, and you won't get to that scale if you don't do the same.


I dunno, I think "We are doing this" because it solves a ton of problems related to running an application at scale. He doesn't mention any of the difficulties that using monoliths have caused "us" over the years.

Packing multiple concerns into a single instance/VM seems particularly cavalier. What if one of the services crashes the OS based on novel user input that is being sent over and over? I think it's naive to say that perfect testing and strongly/statically typed languages make this problem go away.

A large team iterating on a set of monoliths runs into other problems as well. Ready to deploy, but wait, you need to rebase for someone else's changes, oh wait, now your tests don't pass, etc...

A set of monoliths certainly simplifies things for some projects, but many of "us" have battle scars from managing large cloud-based projects that way, and find the flexibility of having a proliferation of independent entities with their own life cycles to be crucial for velocity and availability. At least, that is why some of "us" do it this way.


> teams want to make their own choices, dislike code review or overbearing ‘architects’ above them, and to a lesser extent want to use different and newer languages.

I'm not sure this is 100% correct - or, at least, has never been the case in the 20 years or so I've been working with "microservice" architectures. There's always an architecture team dictating the form of the services themselves - usually much more so than a module in a monolithic application. Part of this standardization is usually a set of languages - you can use Python (with Django) or Java (with Spring Boot), but anything else has to be approved by the committee.

That said, I agree with its ultimate conclusion that microservices haven't lived up to their promise. The usual justification for microservices was and continues to be "we can update one component without disturbing the others". I've never seen that actually happen. Every time one component changes, everything that depends on it has to be retested just to be on the safe side (and most of the time, something is found).


Well, if you don't have tests to ensure a service adheres to its original promises, then yes, you'll need to run the full integrated system (I would probably do it anyway, depending on the context).

Reminds me of the fun time a major financial institution had one of their internal services providing fully 6 historical versions of its API. Interestingly, they had not a single (automated) test for any of these.


I miss Ruby, a lot. It just makes developers happy. It just makes sense, and I am so sad that microservices killed my livelihood. Nowadays there are few Ruby jobs, with lots of competition, because fewer companies use it. I've been working with JS/TS and microservices for 5 years now, but I am still longing for Ruby. I wish Ruby were still around. I wish to be happy again.


Ruby and Ruby on Rails is definitely still here! I'm currently job searching specifically for full-stack roles including Rails, and I'm at least finding some success, but I do find many of the JS/TS/microservices roles you describe as well.

There are many companies still successfully using Rails, don't lose hope!


I think the real reason is that cloud software development has enabled microservices. Meaning, if you aren't on AWS or GCP, forget it. But if you are, it's a paradigm that fits very well with cloud software architecture. Those criticizing it for the complexity probably just haven't spent enough time in code bases where it makes sense.


For me, part of the sales pitch of splitting different workflows into distinct processes was that it often turns out that in a healthy system you need about twice as much hardware for Service A as you did for Service B, so if they are separate clusters (or separate pods) you can do that without wasting hardware.

Then I remember that half the time we only notice a problem with service A because service B started getting slower, and that it can take a lot of servers to make sure that you aren't "wasting" servers.

Sometimes a little slack can be cheaper. Sometimes side effects shorten the feedback loop for a group that has a bad habit of letting problems fester until someone on another team notices.


If you don't really understand how they work, you get blog posts like this. Microservices add many design benefits that prioritize rapid independent work. Monoliths add many design benefits that prioritize simplification and tight integration, testing and releasing. Both exist for good reasons, and both have to be implemented properly. Neither of them is a silver bullet.

But the real reason we use microservices is they're just more popular. Nobody wants to recommend something obscure or old for fear they'll be laughed out of the office. Nobody can hire tech people to work on 'uncool' technology. People like to follow trends. That's why we are doing this.


If you don't know how to write large scale maintainable monoliths then you don't know how to write large scale maintainable micro services. Micro services are inherently more complex than monoliths.


> If we think about resilient systems, the most resilient systems are the ones with the least number of moving parts. The same is true for the fastest systems.

in some sense yes, in some sense no. a monolith with a denial of service vector in part of its functionality can suddenly take down your entire fleet because everything was exactly the same.

In much the same way that domestic bananas are more or less one virus/bacterium away from being wiped out (again), a monolith is very susceptible to any kind of correlated failure, because the blast radius is most likely the entire monolith.


Couldn’t this be solved at the load balancer level by only allowing each running instance of the monolith to serve its own subset of the routes?

It can even be done dynamically on the fly. If the instances begin to struggle, divide them in half and see which half has the problem. Then divide that one in half, and so on, until you have isolated the problem. Then stop those requests altogether and have capacity over for all the people desperately hitting F5.

So now you have both DOS resilience and independent scaling, all without requiring any changes to the architecture or knowing in advance exactly what you should have factored out into its own service.
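The divide-in-half idea above amounts to a binary search over the route set; in this sketch, `is_struggling` is a stand-in for whatever health signal the load balancer observes on the instances currently serving a subset of routes:

```python
def isolate_bad_route(routes, is_struggling):
    """Bisect the route set to find the group causing trouble.

    Assumes one misbehaving route and that `is_struggling(subset)` is
    true exactly when that subset contains it - a simplification of
    the real, noisy signal a load balancer would see.
    """
    group = list(routes)
    while len(group) > 1:
        half = group[: len(group) // 2]
        # Keep whichever half still shows the problem.
        group = half if is_struggling(half) else group[len(group) // 2:]
    return group[0]
```

Once isolated, the balancer can shed just that route while the rest of the monolith keeps serving traffic.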


> a monolith with a denial of service vector in part of its functionality can suddenly take down your entire fleet because everything was exactly the same.

Just like a microservice can bring down the core functions of your service because a single component locks up under DoS


I wonder how much of this can be traced back to the proliferation of System Design interview stages, and resulting training material?

From my experience all of these resources tend to direct you to think in microservices / uber-distributed architectures. I can easily imagine this causing folks to consider this as "the way we do systems now" and taking it to extremes.

The low barrier as well for 1 engineer to decide they want to just get busy on a weekend on their own and build something is another way I've seen this proliferate.


Because UNIX philosophy will sneak into any profession and somehow be accepted without question because of Antifragile psychology effects?

People will drink rat poison, as long as simple, natural, local, small scale, etc.

People don't want what is technically best. They want what makes them feel strong and smart.

They LOOOOVEEE to add challenges and problems because they want to build a world where people are "tested". If something is such a good design that it just works without a lot of talent they get bored.


Work at a bank. Microservices have allowed replacement as tech preferences or performance requirements have changed. I haven’t always liked the changes but they were enabled by the architecture. Can shift one service at a time, run both, leave some services alone, no dependency.

Note: lots of systems, lots of technologies, from SAP to Rust. Not all microservice by a long shot but those have worked pretty well.


"Do we have to release often? No, we do not, because the release process almost never changes and there are so few components."

This is a non sequitur: release frequency is not dictated by the release process - although it is affected by it - but by business requirements.

"We never really had “monoliths” before in development that I experienced".

Good for the OP, I guess. I'm currently involved in the development of 3 monolith services. All are 6-7 years behind on the framework they were based on, and making any sweeping refactorings is cost-prohibitive and tantamount to a full rewrite. Were they architected as microservices (which entails not just code, but infra), major refactorings would've been on the table.

The point about code review policy feelings doesn't make a lot of sense to me either. The company's policy is that all code goes through review. How one feels about that is not relevant; it's part of one's job. Microservices or not, code gets reviewed.


My favorite part of a microservice is that when I walk into a project I'm unfamiliar with, I can just dig into it. I've worked with a few monolithic applications for years without truly understanding half of what they do altogether. A microservice I can wrap my head around in a day or two.


I'm just leaving my first job because - among other things - I really start dreading adding business features to a 20 year old huge monolith riddled with terrible code and even more terrible OOP inheritance roller coasters of madness - up, down, up, up, down, up, down ... who thinks this has anything to do with good abstraction wtf? - It certainly doesn't help that the company has a "it always worked so we will never change" kind of attitude towards absolutely everything.

Currently, I couldn't be more excited to start my new position in which I'll be responsible for speed, reliability & security of millions of cloud transactions in a microservice backend.

That being said, let's see how I feel in 6 months. Experience need to be made by oneself.


This really reads like a bad-faith interpretation of microservices. Of course microservices suck whenever someone starts a new one to do something that shares code or logic with already existing services. Likewise, of course monoliths suck when people just willy nilly add code and modules for each thing; low quality programming is low quality regardless of where it lives

Real microservices implementations don't deviate so much from the proposed statement about synchronous web-tier request handling vs. asynchronous compute workers, in my experience. A few easy rules to guide this development are even touched on in the article, but treated like they could only ever be a tenet of monoliths.

Very strange.


"Microservices" is just a tool that can help a technology company scale. Like any tool, when used improperly or with the wrong architecture, can be very bad for those involved.

Given how many fail to implement it right, it would help if we had a paradigm that caused less failure, but I don't see the author recommending anything except for going back to monoliths. I understand their perspective but I'm not sure this is an either or scenario.

The solution is probably some new paradigm we haven't figured out yet.


As somebody working on software pipelines that do not use a microservice architecture, I've seen it become hard to manage large applications with tons of business logic. Smaller teams with their well-defined pieces of the puzzle seem to be a better fit when you're working at scale.

Note: I'm not a software developer, and I work on some of the largest software pipelines in AWS, so my view might be distorted by my lack of experience in the field and the massive scale we operate at.


Complexity breeds economy. It allows a greater number of members to participate in the value feed. Given the excess of cash in the economy, there has been less incentive to do things efficiently in many segments of the software industry. There is a glut of people exiting previous career paths and attending boot camps so they too can participate. Modern web app development reminds me of the portrayal of the British civil service in Yes, (Prime) Minister.


> A Technical Solution To A People Problem

I spend like 98% of my time dealing with people problems. I will trade technical optimization for communication optimization every single time.


Stop talking about how inappropriate microservices are for applications that will never scale; they're a goldmine for consultants contracting with "CIOs" that every middling-sized company decided they needed because they heard about ransomware on Fox News. Billable hours out the wazoo, converting totally reasonable monoliths into microservices that can't be maintained by the clients and will always go over time and over budget.


The problem with modularization and its supposed benefit of autonomy is that the universe disagrees. Everything is connected whether you like it or not.

A pure microservice ideally has no state, or if it does, its own data store. That's awesome, until the front-end team wants a joined result (query) from multiple microservices, and thus data stores.

What are you going to do now? Build a proxy microservice in front of it? With shit performance, no referential integrity, and hard dependencies? More likely, you won't do that, so the front-end is going to be looping calls. 50 network requests with each having parsing overhead just to render a simple list of things.
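The "looping calls" pattern described is the classic 1 + N round trips; a minimal sketch with hypothetical services and simulated call counters:

```python
# Each function stands in for one network round trip to a microservice.
CALLS = {"orders": 0, "catalog": 0}

def fetch_order_ids(user_id: str) -> list[int]:
    """Orders service: returns the user's order ids (1 round trip)."""
    CALLS["orders"] += 1
    return [101, 102, 103]

def fetch_product_for_order(order_id: int) -> dict:
    """Catalog service: one more round trip per order."""
    CALLS["catalog"] += 1
    return {"order": order_id, "name": f"item-{order_id}"}

def render_order_list(user_id: str) -> list[dict]:
    """Front-end 'join' across two services: 1 + N requests for N orders,
    where a single SQL join in a monolith would have been one query."""
    return [fetch_product_for_order(i) for i in fetch_order_ids(user_id)]
```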

The autonomous team has their own roadmap, which in a connected universe (the one I live in) is a problem, not a solution. The business team really needs that mobile app shipped soon, but the microservice team has no room on the backlog for another 6 months to build the part needed. A business prioritization problem? Perhaps, but it just shows that autonomy is largely a fantasy. The reality is that as soon as something is "one team away", everything becomes dramatically slower and less flexible. That's the price of autonomy.

All of this is like a 100 times worse than the ridiculed traditional software stacks, but hey, I'm not complaining. In a perverted way I benefit from delusional tech choices. It keeps me paid.


Right there with you.


If you only have a few large services that work quickly and efficiently, you probably don't have many teams working on them and have little overhead / devops work. If you have thousands of microservices, suddenly you need many more architecture, platform, SSO / auth / ident teams, security teams, etc. Creates more jobs.


A compact, functional monolith can be leaked and downloaded under a warez link somewhere, or taken by a former employee to a new company. A sprawling byzantine landscape of poorly documented microservices can’t be pirated (or even reproduced legally) without resources close to the original creator.


I never got excited about microservices because I understood that the complexity would be unreal.

I was watching as the industry went all in on something fundamentally wrong. It didn't make anything better, it made everything worse.

Now we are looking back and wanting monorepos, monoliths with simple code, and low complexity.


I do not have a problem with a product being implemented as microservices if:

1. The microservice platform itself is mature and stable. See: VMware Tanzu.

2. The devs implement traceability. See: CorrelationID.

3. There is good cross-training between the dev and SRE teams.

Actually, all 3 apply to monoliths too.


I somewhat disagree with this article. Microservices are great for complex software modules. But some jokers in the industry start writing a microservice for every small function/method in the code. Microservices are better designed by experts.


I can’t even tell what the author is trying to say. This reads like a machine translation.


I'm of the opinion there are pros and cons to both approaches. Which of the two approaches is better for a company is dependent upon the relevant experience and preference/work style of the developers.

The grass is always greener, as they say...


> Microservices: Why Are We Doing This?

1. Because many developers have no idea that it is possible to produce a “local modular application”.

2. Plumbing, technologies and deployment pipelines have become more important than a business problem domain.


I believe there is a middle ground ... monolith + satellite services


SOAs when implemented correctly don’t have any of the issues Michael describes; for one, cascading errors shouldn’t happen and devs should test for this in their game days.


I've started my current project with microservices and I regret it. If I could start over, I'd simply use Rails and focus on the business instead.


To people from the world of smaller companies, "microservices" means creating a lot of JSON / GraphQL services and trying to make them talk to each other reliably.

People from big companies are also doing microservices but they don't necessarily call it that. They use typed inter-process communication technology such as protocol buffer / thrift etc, and work in a monorepo with statically typed languages. This makes things much more likely to work.

I suspect that in general smaller companies doing microservices should move in the direction of what the bigger companies are doing.
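A rough sketch of the typed-IPC idea, with a plain dataclass standing in for a protobuf/thrift message generated from a shared schema (the message and field names here are made up). The point is that a contract mismatch fails loudly at the boundary instead of an untyped dict propagating through the system:

```python
from dataclasses import dataclass

# A dataclass standing in for a generated protobuf/thrift message.
@dataclass(frozen=True)
class UserCreated:
    user_id: int
    email: str

def decode_user_created(payload: dict) -> UserCreated:
    # A KeyError or ValueError here means a broken contract, caught
    # immediately at the service boundary rather than deep in business logic.
    return UserCreated(user_id=int(payload["user_id"]),
                       email=str(payload["email"]))
```

With real protobuf/thrift, renaming a field is a codegen/CI failure across every consumer in the monorepo, which is much of what makes the big-company approach "much more likely to work".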


At what size does a microservice become a service?


> At what size does a microservice become a service?

There is no answer to that, because "microservice" is a meaningless buzzword in the first place, like "cloud", "no code" or "serverless". These are mostly marketing concerns, not engineering concerns. Marketing has taken over web development.


I prefer to use the term 'Hexagonal architecture based software' instead of 'monolith'.

I think this should be taught in schools.


I'm part of a team that is slowly breaking down our systems into microservices. Our old monoliths were a pain to maintain, but the pain came from overengineering rather than from being a monolith. I don't fully buy that our business is better now because we use microservices. It just works well for us now that most of the business is moving to a new language.

We managed to write more performant code even though we're calling off to 10+ services each request. Did microservices make our applications faster? Of course not, but they clearly exposed the issues with our monoliths. We could make the same applications even faster if we moved back to a monolith, but the worry is when does it get back to the state we were in before?

It was not uncommon to have multiple pieces of code calling into the database to grab the same data in our monoliths. The fact is that it's way too easy to just DI a service where it doesn't need to be and boom, a pointless DB call. Do it in a few more places, add a bit of a spiderweb here and there, and you've just amplified the number of database requests doing the same thing. Yes, a lot of this comes down to fundamental issues with the architecture of the applications, things that have been stacked over years and years of tech debt. It's not an issue with a monolith, but rather how developers often treat them. There's a sense that everything is on the table because it's all in the same code base. A lot of developers don't care to think of the future implications of injecting a service where it doesn't belong, it does what they were asked and the person reviewing it thinks the same.

With microservices it feels like the decisions and behaviour of them are more public and there's more eyes seeing what they're doing. If someone is doing something weird, it's easier to call out since these interfaces are public. Previously all the shit code got hidden in random pull requests. Now the shit decisions are out in the open. Everyone is interacting with the same microservices, a shit API is shit for everyone, a slow service slows everyone else's services down, people seem to care more and make better decisions. There's still those guys who just don't give a shit and make a microservice into a macroservice. But when that happens it's easier to see now, it's in our faces, it's not 500 lines of code hidden in a library, it's easier to call out.

As time goes on I do long for a monolith again because personally I've learnt a lot from breaking down our systems into microservices. I know what touches what, I know what shouldn't touch what. The domain knowledge gained from this project would undoubtedly lead to a better engineered monolith. But at the end of the day, microservices force these decisions into the open and fewer architectural mistakes are being made, which is good.

This is also a big reason why I'm a fan of Elixir/Erlang, you're almost forced to think in microservices and that leads to better decisions.

One mistake I think a lot of people make is creating a web of microservices. You want to keep the hierarchy as flat as possible so that each microservice is entirely independent of another. When you want to actually do work, you write an orchestrator that calls into each of these services to carry out the work. This orchestrator is not a consumable service, it's console app, it's a website, it's a product.
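A minimal sketch of that flat-hierarchy idea, with made-up service names: the services never call each other, and only the orchestrator (the actual product) composes them, including the compensating call when a later step fails:

```python
# Hypothetical orchestrator: inventory, payments, and shipping are
# independent services that know nothing about each other. Only this
# top-level function (the product) sequences them.

def checkout(inventory, payments, shipping, order):
    if not inventory.reserve(order["sku"], order["qty"]):
        return {"ok": False, "reason": "out of stock"}
    if not payments.charge(order["customer"], order["total"]):
        inventory.release(order["sku"], order["qty"])  # compensate: undo the reservation
        return {"ok": False, "reason": "payment declined"}
    shipping.schedule(order)
    return {"ok": True}
```

Because all the cross-service knowledge lives in one place, no service grows a hidden dependency on another, which is exactly what a "web of microservices" destroys.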


Even absent everything else, sometimes a fad driven rewrite is easier to sell than a fundamentals driven rewrite but you can still revisit all of the key decisions in the process and get a lot of advantages out of it.

I've used "microservices" as an argument before now in situations where switching to a microservice architecture for that part of the code was in and of itself not really a significant advantage (though not a significant disadvantage either) but it bought me the opportunity to clean up everything else about that functionality with an extremely good end result that I don't think I could've got without selling the change that way.


> isn’t this an argument for ... better hiring instead?

What are the odds the companies aren't already doing the best hiring they can?


>> A Technical Solution To A People Problem

I think the author has correctly identified the reason but the given explaination is wildly off base. There's a much better answer to this and interestingly it predates the term "microservice" by around 40 years!

In 1968 Melvin E. Conway had his paper "How Do Committees Invent?" published; you can read it here: https://www.melconway.com/Home/pdf/committees.pdf (since the article author mentions Waterfall, which also has an excellent paper behind it; this one, like that, is super readable and accessible).

TL;DR "organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations"


It's a case by case scenario.

I built my project in a microservice architecture, so when the author states that microservices were built so that teams can enjoy their independence, it doesn't apply to me as I am currently a 1 man team.

I split my services up for many reasons, but primarily two: the ease of migrating from Python to Go, since rebuilding one service at a time is far easier than a full-blown rebuild, and performance and scaling, as parts of my application will be hit harder by outside requests than others.

>Web requests can be managed by one type of instance, that results in one EC2 image or whatever. Anything that can be handled within the lifecycle of one request can be handled there, and these instances are horizontally scaled behind a load balancer.

The author gave an extremely simple and generic system design, and while this can work for a sizable amount of applications, there are still a significant amount of applications that require and demand a more complicated structure.

One of the services that I have split, almost exclusively deals with real time connectivity with websockets which requires a towering amount of performance as opposed to the other services that I have - to place these in a monolithic structure, scaling would be incredibly awkward - imagine adding 10 more load balanced boxes just so you can handle your websocket requests, but now the part of your app that deals with all your http requests is now also horizontally scaled when it didn't need to be.

>On the communications front, internal web services are often doubly inefficient by using REST rather than binary transmissions. There’s no reason for any of this, and if multiple microservices hops are used, this all adds up and slows down the system. Even just a conversion to JSON and back is a wasted effort, more so if done dozens of times.

This is true. While initially developing internal communication with gRPC, I've reverted back to HTTP despite the 30-50ms TLS/SSL handshake, simply because it's a tried and true technology.

gRPC is still relatively new, with seemingly insubstantial development, and I was afraid to proceed further, as roadblocks and technical debt may accumulate in the future.

However, in a smaller mesh, anything shorter than ~150ms is, in my opinion, insubstantial. A blink of an eye is just about 150ms, and if internal requests are hitting multiple endpoints before resolving the original request, then that's an architectural problem, not evidence that "microservices" are a problem.

Not everything has to be finely chopped, but breaking some portions down can make it more digestible


Frank was going to die anyway.


> Microservices are, in that essence, a religious belief that we should approach with skepticism - as is true with many things in software.

This is a ridiculous strawman. The point of microservices is higher-level architecture and design: to have discrete components in your system that fulfill a single responsibility (do one thing, and do it well), so that reasoning about the entire system as a whole becomes simpler. It allows the microservice to expose its implementation behind a well-defined API, and thus keep its privates private. Additionally, microservices permit those (sub-)systems to scale independently.

Monoliths can do some of that, but often the "keeping the implementation private" is the hard part. When every part of the system has access to a database, people will reach behind the API & just get the data they need. It's not impossible to prevent, per se, but having the service entirely separate makes for a much better, stronger separation that forces the API design & planning that would not otherwise occur, as it would otherwise require a level of discipline that I don't think today's PMs and "agile"/scrum permit to exist.

Now, the article tries to address one of my points,

> In the usual web application, this is not a problem, because load and tested ensures each VM will be tested to it’s autoscaling parameters, and then it will grow.

No, it most certainly does not… I've seen plenty of VMs in my career running monoliths that were 90+% idle with most RAM free, because the application was bottle-necked on the database. And even if they're "monoliths", there's inevitably some other service not part of the monolith (either b/c it is third-party, or whatever) that then gets its own ASG, its own set of VMs for redundancy … and it is waste. Never have I seen exactly 3 VMs running exactly 1 monolith.

(& VMs are like the worse case, too, as they are inevitably hand-crafted snowflakes. But worse, if a dependency, such as a package, is required… what part of the system required it? If I remove a use, is the package still required? Answering these requires reasoning over the entire monolith, something that, once the codebase is big enough, becomes effectively impossible.)

> On the communications front, internal web services are often doubly inefficient by using REST rather than binary transmissions.

… this is beyond wrong and it doesn't make any sense. You can serve Protocol Buffers over REST … is that not a "binary" transmission? (Not to mention that HTTP/2 & later is a binary protocol…) Sure, many people use JSON today, but there's no requirement to do that, and I've written several RESTful endpoints that didn't serve or consume JSON. (Generally because the requirements were such that that would make no sense.) The protobuf vs. JSON is a whole different debate, and each format has its pros & cons, but it is certainly orthogonal to the question of whether microservices are good or bad…

> Code that needs to be shared between the asynchronous services and the web tier should be kept in libraries used by both of them and is not a service call.

If you're going to do a monolith, yeah, this is what one should be doing. I've just literally never seen it done. (In fact, I've suggested it, multiple times, when the second, third, fourth, fifth use case comes up: "we have code for that, but it isn't in library form. Let's solidify it into a library, & then change the existing consumers to use it, and then your use case is just another consumer" is inevitably met with "but I just want to do $whatever_it_is_thats_the_use_case it's just one more instance of this code, what could it hurt?" … followed by "why am I hitting $corner_case" and "well, that's some old organic growth; the original code, that we would have turned into a library, handles that…")

> The number of which does not really matter, but in a world of 200 microservices

This strawman is repeated in every "I hate microservices" article. I've never seen microservices taken to that extreme, and yeah, if that's what you're doing, I expect you're in for a world of hurt. But that's not the point, and I doubt you actually have 200 well-defined systems with well-defined boundaries & APIs. But yes, if you take something to the absurd, it breaks down?


This is the story of software development. We have no formal way of defining which style is better. It's just anecdotal arguments over and over again with no proof.

Like evolution, you would think the technology gets better through natural selection. However, selection pressure in the real world is vague. The best doesn't necessarily win (just what works) and microservices is a form of genetic drift.


Is that because the right answer depends on a multitude of things (i.e., neither is always better)?

The problem you're trying to solve, the size/expertise of your team, scale, customer expectations, legacy integrations.


One answer could actually be better than the other. However these answers and problems are just essays of qualitative experiences. People can debate with each other endlessly on these things but until these answers are formally defined so we can create logical theorems off of them, we will never know definitively which is better.

As long as there's no catastrophic failure, nature doesn't necessarily choose the fittest mutation, just what works better than the competition. And there are multiple factors at play here. A company with better marketing could do better than a company with better technology, hence selection pressure on technology is negated.

What ends up succeeding is the cohesive whole. Every metric of the company including the CEO, funding, marketing, luck and everything else being the best defines "fittest". This means if everything else is the best but your technology is the worst you still succeed. Hence bad technology continues to propagate and exist within the industry.

It's sort of the same reason why cancer still exists. Why hasn't natural selection eliminated cancer?


Does the same answer need to apply to a two person startup and a multinational tech giant?


I don't usually see companies with microservices. It's more like there's this one big monolithic service with a bunch of satellite services orbiting the big one.

The only time where microservices truly exist is if you're part of a big company that has a multitude of initiatives. But within a scope of a single project, there's almost always a mothership.


Some people just like to be countercultural for the sake of it, and apparently the author is one of those people. Is there a word for reverse cargo culting? Cargo hipstering?

Not the apt, "The cargo is from a plane!" but rather, "We never needed any of these supplies to begin with! We were better off starving!"

It's wholly unsurprising that the next series of topics the author plans to discuss are mental health. That makes perfect sense, given the rest of the article.


Microservices are what allow one to scale independently.


20% legitimate Conway's law concerns, and 80% cargo culting other orgs' Conway's law concerns in an effort to feel like you're a big boy org with big boy org problems.


To me the core point is just that all access to persistent data should be mediated through a service interface which implements access controls and restricts what operations available in the underlying data store are exposed to other clients. Whether that qualifies as a "micro service" seems like mostly just a semantic game.
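A minimal sketch of that mediation idea, with a made-up `AccountService` and an in-memory dict standing in for the data store. Whether this interface runs in-process or over a network is an orthogonal deployment choice:

```python
# The data store is reachable only through a narrow interface that
# enforces access rules and exposes a fixed set of operations.

class AccountService:
    def __init__(self, store):
        self._store = store  # plain dict standing in for the database

    def get_balance(self, caller, account_id):
        acct = self._store[account_id]
        if caller != acct["owner"]:
            raise PermissionError("caller does not own this account")
        return acct["balance"]

    # Deliberately no generic "run arbitrary query" method: clients can
    # only do what the interface chooses to expose.
```

Clients never see the raw store, so the access-control check can't be bypassed by reaching behind the API, microservice or not.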


I hate to say it, but I told about 100 people this long long ago before they invested hard into microservices...

The same people who said to me that the pandemic would just last a few weeks.

The same people who thought a PT Cruiser was a beautiful car.

The same people who believed dogecoin and NFTs would make them rich.

People keep willfully failing because there's no incentive for them to be credible and accountable... There's just all the money they can make from perpetuating lies.

Sometimes we need to unplug the microphone, or ask the more quiet individuals what they think.


Microservices actually allow teams to work independently of each other (once there are agreed-on interface contracts) and, more importantly, decouple schedules - I don't have to wait or switch to something else if you're 3 days late delivering your component; it is usually easy to mock the inputs and outputs and build test harnesses and do lots of things that are more difficult with monoliths.

HOWEVER, I don't understand why people get religious about methodologies. Every time a new fad comes out, I don't look at it as the "One True Way", I just evaluate its strengths and weaknesses and add it to my toolbox. I also am not religious about following all the precepts of the new religion. I'll mix agile and waterfall if I want. I will conduct my daily status meetings how I want and call them scrum, even if it's just to piss off zealots. I will sit down during standup. I'll mix microservices and monoliths. I will figure out the value I want from a methodology and be happy when I get that; I don't believe in utopia anymore.


> I don't have to wait or switch to something else if you're 3 days late delivering your component; it is usually easy to mock the inputs and outputs

If you know the spec for inputs and outputs, wouldn't you be able to do the exact same in a monolith?
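For instance, a Python sketch of doing exactly that in-process, with a made-up `PricingService` interface standing in for the late component:

```python
# Program against the interface; stub it until the real implementation lands.

class PricingService:
    def quote(self, sku: str) -> int:
        raise NotImplementedError  # real implementation is 3 days late

class StubPricing(PricingService):
    def quote(self, sku: str) -> int:
        return {"widget": 100}.get(sku, 0)  # canned answers per the agreed spec

def order_total(pricing: PricingService, sku: str, qty: int) -> int:
    return pricing.quote(sku) * qty
```

Same decoupling, no network boundary required.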


Depends on merging, etc. A lot of times when you develop monolithically you develop in different branches and some of your pieces won't show up until the next merge. You can unit test but can't integration test in many cases.


I think the complexity claims for microservices are way overblown. There are a set of trade offs to be made on both microservices and monoliths.

For microservices you need a good idea of how you’re going to do IPC and how you’re going to maintain isolated state. In most cases there are reasonably easy solutions to both problems, but in general there is a little bit more more up-front work to do when starting with microservices.

For monoliths you need a good understanding of how you are going to upgrade a running system without downtime, and how you are going to stop developers from taking stupid shortcuts that create invisible internal coupling. IMO, if your team is big enough to have the necessary processes to get this right, it’s big enough to deal with microservices too.

So I honestly don’t get the hate from some on HN for microservices. You can fuck anything up if you try hard enough, but microservices epitomise the principles of good systems engineering, most particularly around separation of concerns. I honestly don’t understand why anyone would choose a monolithic architecture for a new build in 2022.


The reason for choosing monolithic architecture is simple: velocity. You need to move fast and be nimble until you are able to find a product that people want and/or need to use.

The people who (at least initially) advocated for microservices were the first to point out that it is better to start as a monolith and then refactor into microservices as required by external factors.


I’ve built large systems with both architectures and I strongly disagree that monoliths are more nimble or that they give you more velocity. I don’t think this claim stands up to much scrutiny at all.

I know the advice is to build a monolith first, and I used to think that too. But I now think it’s probably not good advice.

Each to their own, but I won’t be building a monolith again.



