
I especially dislike when people talk about microservices and scalability like they are a solution. Most companies won't see the kind of scale Facebook/Google/etc sees, not only because they won't have that many clients, but also because there's a good chance they don't need all of their clients in the same system. Most businesses don't benefit from the network effect. In that case, your scale isn't the sum of all your clients, as it is for Facebook/Google/etc; it's just your biggest client!

Also, I don't like how the discussion about "microservices" is drowning out the discussion about plain services. Services are a natural consequence of Conway's law when your organization gets bigger. You can see parallels with the first wave of OO programming for software projects, where the goal of having a public interface and private internals was to make projects scale when 10/100/1000 times more people are working on them. This is exactly what's happening with services: a team maintains a public API and private internals (and developers from other teams depend on the internals anyway, because reality isn't that easy). Then it's easy to see what microservices are: ravioli code.

In a way, services and microservices are OO programming in the Alan Kay sense: late binding, message passing, local retention and hiding of state.
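
To make the parallel concrete, here's a rough sketch (illustrative only; the BillingModule/BillingService names and the /invoices endpoint are made up): the same public-API/private-internals boundary, first as an in-process module, then as a service where the boundary is enforced by message passing:

    import json
    import urllib.request

    class BillingModule:
        """In-process version: the public interface is a convention."""

        def create_invoice(self, customer_id: str, amount: int) -> str:
            # Public API that other teams are supposed to call.
            return self._persist(customer_id, amount)

        def _persist(self, customer_id: str, amount: int) -> str:
            # Private internals -- nothing stops another team from calling
            # this directly, which is the "reality isn't that easy" part.
            return f"invoice-{customer_id}-{amount}"

    class BillingService:
        """Service version: the same boundary, enforced by the network.
        Late binding and message passing instead of an in-process call."""

        def __init__(self, base_url: str):
            self.base_url = base_url

        def create_invoice(self, customer_id: str, amount: int) -> str:
            payload = json.dumps({"customer_id": customer_id,
                                  "amount": amount}).encode()
            req = urllib.request.Request(
                f"{self.base_url}/invoices", data=payload,
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)["invoice_id"]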



Sometimes you just need the scalability. I work on a two-programmer team making a data integration product. There are a whole lot of features, but one is that you can push a button and move like 10 years of ERP data from one system to another. This is done as a scalable microservice because running a big migration can up our resource usage like 50x from baseline, and sometimes we're running 5 to 10 at once. Usually we're running 0. Between all services we use on average less than a gig of memory, but we regularly allocate tens or even hundreds of gigs at once for short periods of time.
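
Roughly, the scaling pattern looks like this (an illustrative sketch only, not our actual implementation; the queue-driven worker setup and names are made up):

    import multiprocessing as mp
    import queue
    import time

    def run_migration(job_id: str) -> None:
        # Stand-in for the heavy, memory-hungry migration work.
        time.sleep(1)

    def run_until_idle(jobs: mp.Queue, max_workers: int = 10) -> None:
        workers = []
        while True:
            workers = [w for w in workers if w.is_alive()]
            try:
                job_id = jobs.get(timeout=5)
            except queue.Empty:
                if not workers:
                    return  # nothing queued, nothing running: scale back to zero
                continue
            if len(workers) >= max_workers:
                jobs.put(job_id)  # at capacity: requeue and try again later
                time.sleep(1)
                continue
            w = mp.Process(target=run_migration, args=(job_id,))
            w.start()
            workers.append(w)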


That's a great example that nicely complements jayd16's remark here: https://news.ycombinator.com/item?id=28636152

> microservices are just services with the acknowledgement that minimum size should not factor into breaking off a new service

Thanks to you both for the perspective.


> In a way, services and microservices are OO programming in the Alan Kay sense: late binding, message passing, local retention and hiding of state.

> This is exactly what's happening with services: a team maintains a public API and private internals (and developers from other teams depend on the internals anyway, because reality isn't that easy). Then it's easy to see what microservices are: ravioli code.

This is a moot point. Following up on your OO example, there is no reason you can't have separate public interfaces and private implementations within a (modularized) monolith.

The biggest downsides of (modularized) monoliths lie in 1) scaling - with separate services you can independently scale and load balance each one, and 2) releases - separate services can be released independently (at the cost of maintaining version compatibility).


>the discussion about "microservices" is drowning out the discussion about plain services.

Yeah I think this really sets back the conversation when, in my mind, microservices are just services with the acknowledgement that minimum size should not factor into breaking off a new service.


> Most companies won't see the kind of scale Facebook/Google/etc sees, not only because they won't have that many clients, but also because there's a good chance they don't need all of their clients in the same system.

I feel this line of argument is disingenuous and is based on a mix of strong personal opinions and very limited to non-existent insight into the problems other orgs actually experience.

For starters, the main selling point of microservices is not performance. It is organizational advantages. It is a way to draw clear and crisp lines regarding ownership and ops. Small teams are able to design and deploy and run and troubleshoot small bits of a large machine without wasting time in back-and-forths with other teams, and they can put in place safeguards so that if other teams screw up then their blast radius is limited.

But regarding performance, let's not fool ourselves into believing that there's always a fatter network pipe and a beefier box to deploy to. Often there is, but often there simply isn't. Once you are forced to deploy your monolith into multiple instances, most of the criticism directed at microservices is rendered moot. Also, beefier boxes are far more expensive to operate, so let's not pretend there are no costs.

Also, there's the reliability aspect. If you want to have a reliable service then you need redundancy, which means multiple deployments. Once you have to deal with load balancers serving traffic to multiple instances of your monolith, there is no longer a significant architectural or ops difference between operating a monolith and shaving a service or two out of it. Thus most of the criticism directed at microservices is rendered moot.
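
To make that concrete, here's a rough sketch of the routing side of it (hostnames and paths are made up for illustration): routing to N monolith replicas and routing to an extracted service end up being the same operation.

    import itertools

    # Two replicas of the monolith and one extracted service; all hypothetical.
    monolith_pool = itertools.cycle([
        "http://app-1.internal:8080",
        "http://app-2.internal:8080",
    ])
    billing_pool = itertools.cycle([
        "http://billing-1.internal:8080",
    ])

    def pick_backend(path: str) -> str:
        # The surrounding ops machinery (health checks, retries, deploys)
        # is the same either way; only this routing rule changes.
        pool = billing_pool if path.startswith("/billing/") else monolith_pool
        return next(pool)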

Lastly, some orgs are ok with serving users in a very limited geographical region. That's perfectly fine. Other orgs might have users in different parts of the globe. Do you expect them to tolerate latencies in the 200-500ms range routinely, or do you find it acceptable to reel those latencies back into 50ms territory by deploying a part of the service closer?

I get the monolith-first principle, but you're fooling yourself if you believe that only FAANG-level companies can benefit from microservices.


> For starters, the main selling point of microservices is not performance. It is organizational advantages. It is a way to draw clear and crisp lines regarding ownership and ops. Small teams are able to design and deploy and run and troubleshoot small bits of a large machine without wasting time in back-and-forths with other teams, and they can put in place safeguards so that if other teams screw up then their blast radius is limited.

That's true, but at this point we should clarify what we mean by microservices. By "microservices", I mean "you end up with more services than engineers". If you've got a team for each service, then I would just call them services. I also don't agree that back and forth with other teams is always a waste of time, especially since just after this you talk about limiting their blast radius if they screw up. That's probably necessary if your org doubles every year or already has thousands of engineers, but that's not for everyone.

> But regarding performance, let's not fool ourselves into believing that there's always a fatter network pipe and a beefier box to deploy to. Often there is, but often there simply isn't.

I don't think we are talking about the same thing here. My point was that if you don't have a network effect (like Twitter or Facebook do), there's no need to put all of your users into the same box. You could go to the opposite extreme: one instance for each user. So you only need to be able to handle your biggest client.
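
As an illustrative sketch (tenant names and URLs are made up), the extreme version is just a per-client routing table:

    TENANT_INSTANCES = {
        "acme":   "https://acme.erp-sync.internal",   # hypothetical tenants
        "globex": "https://globex.erp-sync.internal",
    }

    def route_request(tenant: str, path: str) -> str:
        # Each client gets its own instance, so capacity planning only has
        # to cover the single biggest client, not the sum of all of them.
        return f"{TENANT_INSTANCES[tenant]}{path}"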

> Once you are forced to deploy your monolith into multiple instances, most of the criticism directed at microservices is rendered moot.

I don't think that's true. Having multiple instances of a monolith is not the same as having multiple different services.

> Also, there's the reliability aspect. If you want to have a reliable service then you need redundancy, which means multiple deployments. Once you have to deal with load balancers serving traffic to multiple instances of your monolith, there is no longer a significant architectural or ops difference between operating a monolith and shaving a service or two out of it. Thus most of the criticism directed at microservices is rendered moot.

> Lastly, some orgs are ok with serving users in a very limited geographical region. That's perfectly fine. Other orgs might have users in different parts of the globe. Do you expect them to tolerate latencies in the 200-500ms range routinely, or do you find it acceptable to reel those latencies back into 50ms territory by deploying a part of the service closer?

Again, I don't think that's true. You can have one/multiple monolith instances in each region. You may argue that it's not strictly a monolith if there is a routing and load-balancing service in front of it, and that's true, but it's also not a microservice architecture.


> That's true, but at this point we should clarify what we mean by microservices. By "microservices", I mean "you end up with more services than engineers".

That's not what "microservices" is.

If you are not familiar with the concept then you should refrain from posting personal assertions about guidelines or best practices.


> That's not what "microservices" is.

That's how it sometimes ends up. You said:

> For starters, the main selling point of microservices is not performance. It is organizational advantages. It is a way to draw clear and crisp lines regarding ownership and ops.

But that's just regular old services. The "micro" in "microservices" usually implies smaller services, and one extreme of that is having more services than engineers. That's something I've seen happen, heard about, and read about. Microservices can be abused by making them too small and creating too many of them. Uber is an example of that.



