Ask HN: Why Not Microservices?
41 points by rajacombinator 7 months ago | hide | past | favorite | 29 comments
It’s pretty clear that microservices have fallen out of favor on HN. As someone who’s currently experiencing pain on both the pro and con sides here, I’m curious to learn more about the argument. Is there more to it than “you’re not Google, don’t act like Google” for small startups? How about companies that are at scale?



In my experience, microservices are an emergent property of Conway's law [1], essentially that a system's architecture will tend to mirror its team(s)' communication structure. The biggest bottleneck in software engineering is communication between engineers, especially as teams grow beyond 3+ people working on a single system. It becomes difficult to keep everyone on the same page about every design decision and change, which becomes obvious if your team does code reviews (imagine if every code change required 3+ approvals). Communication unfortunately scales O(n^2).
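To make the O(n^2) point concrete, here's a tiny sketch of the standard pairwise-channels count (the formula is the usual n(n-1)/2; the function name is mine):

```python
# Pairwise communication channels grow quadratically with team size:
# n engineers -> n * (n - 1) / 2 possible one-on-one channels.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (3, 5, 10, 20):
    print(f"{n} engineers -> {channels(n)} channels")
```

Going from 5 to 20 engineers multiplies the channel count by 19x, which is roughly why teams split well before that point.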

As a result, what makes a "team" is its leadership structure, i.e. it's determined by the number of engineers who want to be leaders (which includes accountability and ownership). If you have a team with multiple leader-types who want more ownership, and a natural seam in the solution space emerges to spin off, they will tend to become the communication hubs for other engineers, and eventually spin out into their own subteam with their own microservice. They'll want their own ability to release when necessary, and to choose dependencies without needing to convince anyone beyond their immediate subteam.

In short, if you're 1 person, microservices make little to no sense given the additional work and complexity involved. If you're more than 1 engineer, it depends on individual engineers' willingness to be a lead, own a whole part of the system, and become a communication hub. Companies like Google and Amazon will have lots of microservices because so many engineers want to be tech leads (either for promotion or self-fulfillment).

[1] https://en.m.wikipedia.org/wiki/Conway%27s_law


I think microservices can become a trap where you think you might need one, but actually you don't. A bit like blockchains, a microservice structure solves very specific issues at a cost. Not everybody has these issues. Many people seemingly wanted to have these issues so they could justify using that hip tech, while forgetting the tradeoffs involved.

Drawing a line in the sand between parts of your service and creating common interfaces between them is not a bad idea, but I'd argue that doing so too early brings the cost of constantly dealing with the communication between those parts.

It is a similar issue to programmers who try to write everything generic from the start or optimize prematurely: it can get in the way of productivity and make your service inflexible in a development phase where it should still be very flexible.

That being said, I believe the core ideas of microservices, applied at the right time in the right project, would do wonders. But just like with blockchain, the hardest part might actually be deciding when to use them and when not to.


Careful consideration must be taken with a microservice architecture. The ideal situation is that each instance can work on its own and pass its work along to the next microservice in your application. Having to reach out to other services for additional data should be kept to a minimum. This way, any given instance can be swapped out if or when it's needed without affecting the application.

However, as the application grows, it will most likely encounter scenarios where each microservice needs to do something more than what it was originally intended to do. A service might need information from an API or might need to access additional data from a database somewhere before it can work properly. And if that's the case, these hypothetical APIs and databases might be needed in other parts of the application as well. So now, instead of everything being structured nicely with interchangeable instances, you are dealing with a network where any given node might not be able to be updated independently of the others. It can quickly become a large mess.

That aside, depending on the needs of your application and how it's set up, the various microservices can create a much larger cost than something like a monolithic system. Typically, each service would exist on its own instance, and your hosting provider will happily help you figure out your usage needs and the price that will come along with it.


As mentioned the most common motivation is for scaling the number of developers/teams. Other good motivations:

Differences in the maturity of software. You have a mission-critical, revenue-generating core and newer experimental forays into new territory. You want to move rapidly there, tolerating more risk, but don't want to destabilize your cash cow.

Certain parts have differing requirements. e.g. a payments system may need greater auditing and compliance levels and separating it makes this easier to do.

The other often-cited reason is the ability to scale the infrastructure independently. This is still true even with cloud offerings; being able to size and configure parts differently can get you a long way into scaling. The alternative view is that you'll need to shard eventually, and the sooner you figure this stuff out, the more it can save you from having to implement a lot of intermediate-scale solutions. Even with shards, you can still run into reasons to run parts of your system differently, e.g. choice of datastores, storage engines, isolation levels, etc.

I've worked in pretty much all these cases, as well as the let's-do-microservices-from-the-start case. That was hard, as we were doing instant messaging and message delivery failure rates are critically important. At the same time, it wasn't unachievable with only a 3-4-pizza-sized number of devs. It's not optimal, though, when the number of services =~ number of devs and you have turnover and have to learn/teach them all. I can say that being able to change something and have CI run unit/integration/end-to-end/journey tests in a blue-green deployment to prod in 2 minutes is something I'll always miss.


* more than 10 teams? -> microservices. this means adding some guidelines: strict communication guidelines (APIs, events, cross-team meetings)

* compliance requirements? -> (micro)services. you need to host some data and workers in separate guarded environments

* too many legacy third party dependencies which might fail or stop whole processes? -> wrap with service

* different independent products in a company? services

for everything else use monoliths
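The rules of thumb above can be sketched as a decision helper (the function and parameter names are mine; the >10 teams threshold comes from the comment):

```python
# Rough decision helper mirroring the rules of thumb above.
def pick_architecture(teams: int,
                      compliance_isolation: bool,
                      flaky_legacy_deps: bool,
                      independent_products: bool) -> str:
    """Return 'services' only when one of the listed forcing functions applies."""
    if (teams > 10
            or compliance_isolation      # data/workers in guarded environments
            or flaky_legacy_deps         # wrap unreliable third parties
            or independent_products):    # separate products, separate services
        return "services"
    return "monolith"  # the default for everything else

print(pick_architecture(teams=3, compliance_isolation=False,
                        flaky_legacy_deps=False, independent_products=False))
```

Note the shape of the function: every branch is a forcing function, and the monolith is the fall-through default rather than one option among equals.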


+1, I think this is the most comprehensive answer. Default to using monoliths whenever possible. Simpler = more robust; more complex = more fragile.


When done correctly, microservices can replace a gigantic single point of failure.

Unfortunately, what I sometimes see is that it's replaced by multiple interacting single points of failure - i.e., if any one of them fails, the whole Rube Goldberg architecture fails, and there are so many different things that can fail! ...which is clearly worse than just having a monolith.


An issue I've seen happen is that as the number of microservices grows, no one keeps track of the service dependencies. So what happens is service A goes down, then someone says "Service B is down too," and people start thinking it's some wider outage (they always blame the network), but it turns out Service B just depends on Service A, and that wasn't widely known.

The issue then becomes that teams don't have a good understanding of the expected impact of an outage. They think microservices are helping to minimize the blast radius, but really they just don't know what the true blast radius is.
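Computing the true blast radius is just a reachability question over the dependency graph, which is exactly the map that "no one is keeping track of." A minimal sketch (the services and the dependency data are invented for illustration):

```python
from collections import deque

# depends_on[X] = set of services X calls directly (illustrative data).
depends_on = {
    "B": {"A"},      # B depends on A, so an A outage takes B down too
    "C": {"B"},      # C depends on B, so the outage cascades further
    "D": set(),      # D is independent
}

def blast_radius(failed: str, depends_on: dict) -> set:
    """All services transitively impacted when `failed` goes down."""
    impacted, queue = set(), deque([failed])
    while queue:
        svc = queue.popleft()
        for other, deps in depends_on.items():
            if svc in deps and other not in impacted:
                impacted.add(other)
                queue.append(other)
    return impacted

print(blast_radius("A", depends_on))  # {'B', 'C'}
```

The hard part in practice isn't this traversal; it's keeping `depends_on` accurate as teams add calls nobody announces.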


If service A going down brings service B down, then the microservices aren't divided right and/or too much data is being shared over the network rather than asynchronously transformed/copied. The only time this should matter is when there's a breaking change to the way services communicate, which should be very well known, e.g. having to bump a protocol version number, with downstream understanding the new version before upstream ships it.
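The version-bump discipline described above can be sketched on the consumer side like this (message shape and field names are invented for illustration):

```python
# Minimal sketch of the handshake above: the downstream consumer declares
# which protocol versions it understands and rejects everything else, so a
# breaking change requires an explicit, visible version bump upstream.
SUPPORTED_VERSIONS = {1, 2}

def handle_message(msg: dict) -> str:
    version = msg.get("protocol_version")
    if version not in SUPPORTED_VERSIONS:
        # Fail loudly instead of silently misreading a new format.
        raise ValueError(f"unsupported protocol version: {version}")
    return f"processed v{version} payload"

print(handle_message({"protocol_version": 2, "body": "..."}))
```

The ordering matters: downstream adds version 3 to its supported set and deploys first; only then does upstream start emitting v3 messages.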


Yeah, everyone knows this. The question, as I understood it, was about the downsides of microservices. And that, to me, includes how they are implemented and managed in the real world, rather than in some perfect hypothetical world where everyone does everything by the book.


There is no such hypothetical book. I've read many of them, and collectively they don't cover what you'll need to know or do. There are also very few things everyone knows about microservices. Choosing the right bounded contexts is the biggest knowledge gap, and most examples talk about a noun1 service and a noun2 service.

Learning about microservices by reading books that distill it into abstract examples is like learning Haskell by reading about category theory. Maybe possible but challenging and not for everyone.


I think where people get into trouble is when they try to implement a true microservice architecture without actually understanding the tradeoffs and complexities. You will likely end up with a spaghetti rube goldberg mess with weird service dependencies.

I also think people tend to strive for too much purity. It's okay if you want to avoid the replication and event-bus complexity of microservices and choose a monolith-esque database architecture. Just split your APIs across business domains, leave your database as it is, and be done with it.

Just make sure you stick to the fundamental principles of microservices. Scoped, fault tolerant, independently deployable. Fault tolerant does not 100% have to include database failures. Don't ever directly cascade API calls from one service to another.

The biggest "why not" is that you should choose the architecture that best solves the problem. Don't treat architecture as a playground to try out random shit. Solve the business problem in the simplest way possible. In most scenarios you probably don't need isolated, fault tolerant, distributed services. I would guess that 75% of microservices implementations fall under the premature optimization anti-pattern.


I work in an organization with a lot of internal line-of-business apps for manufacturing. While there might be hundreds or thousands of users, the problem just doesn't really warrant the extra steps that microservices requires to really work. (If my users want reporting, we can just run reports against the main database. We don't need some Report service implementing CQRS or Event Sourcing to put all the disparate pieces together...)

However, I will say we've taken some considerations from the microservices discussion to heart and thought more about fault-tolerance, ci/cd, and streamlining the deployment process so we've definitely gotten some value from the zeitgeist.

That said, most of our apps end up being a fairly relaxed monolith with some service integrations here and there. And you know what? It's pretty nice sometimes :)


One disastrous way microservices architectures can fail is when they stop being aligned with features and therefore stop test-driven development from working.

The promise of test-driven development is that there is a vanishing amount of code that is not under some sort of meaningful test coverage. Those tests have to target the smallest possible parts of the system to be efficient, else tests are just too brittle and slow.

Test-driven development works best for self-contained systems that represent the whole feature. Using it encourages architecting the system in this way.
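For contrast with the distributed case, here's the kind of small, fast, self-contained test the comment is describing: one pure function, no network, no database. (The function and tests are illustrative, not from the thread.)

```python
# A pure function with its behavior pinned down by tiny, fast tests.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_basic_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_invalid_percent():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        return
    assert False, "expected ValueError"

test_basic_discount()
test_invalid_percent()
print("all tests passed")
```

The whole suite runs in milliseconds; the equivalent check across two deployed services would need test environments, network calls, and fixtures for both.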

There are three environments where it doesn't work: graphical user interfaces, monoliths and distributed systems. The biggest issue with these is the huge state space to test and their comparative slowness. Also, they are usually expensive to set up and maintain.


Microservices can be good. The problem is that people don't use them right (partially because everyone was joining in the fad, so there are lots of bad examples). Dependency management, documentation, and loose coupling are important. I work at a company that embraced microservices, but it's basically architected as a distributed monolith. And there's so much overhead for context switching, paperwork, etc.

It's like how they embraced product teams. Yet in the past year it has been a constant game of musical chairs or hot potato, with managers reorganizing which team owns which app. The whole point of product teams is to give teams ownership and keep expertise with the app. We might as well be passing it off to a support team if they keep doing this.


Yea this sounds a lot like my current situation. Microservices sounded good, a lot were built over time, now no one wants to understand or own them so they’re just getting punted around aimlessly.


Microservices, in my experience, are chiefly about scaling teams (more freedom to move faster). In fewer cases, it is about optimizing a hot path. If it is not one of those, I'm having a hard time seeing why you would want the overhead of microservices.


Have you run into problems where it hinders team scaling by creating too much cross-team friction?


The big issues we had were because we had different services reaching into and editing the same DB tables in some cases. Re-architecting so that all data access goes through a new API (instead of direct DB access), across 13 services run by 8 different teams across 2 business orgs, has proven hard.
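The shape of that refactor is: one owning service exposes an access layer, and every other service calls it instead of issuing SQL against shared tables. A toy sketch using an in-memory SQLite database (table and function names are invented; a plain function stands in for the HTTP API):

```python
import sqlite3

# The owning service's private database: other services never touch it directly.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO orders (status) VALUES ('shipped')")

def get_order_status(order_id: int) -> str:
    """The only sanctioned way for other services to read order data."""
    row = conn.execute(
        "SELECT status FROM orders WHERE id = ?", (order_id,)
    ).fetchone()
    if row is None:
        raise KeyError(f"no such order: {order_id}")
    return row[0]

print(get_order_status(1))  # shipped
```

The payoff is that the owning team can change the schema freely as long as `get_order_status` keeps its contract; the pain is migrating 13 services' worth of embedded SQL to calls like this one.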

The other thing that can be a pain is due to rigid agile sprints. Need a change? Well that might get into their next sprint but there are probably other, higher priorities, so maybe next quarter. Or team A just does the work on team B’s service, and waits for code reviews and quality checks.


I don't see anyone mentioning a definition of microservices... How can you talk about it if you don't define it?

I've split up an application for a project into separate containers with docker-compose. They shared some environment state with a dotfile. I don't have to reinvent the wheel and can just pull some services into it and tag them so I can easily rollback and roll forward. I can offload storage with Docker's storage driver. The services talk through sockets, so it's easy to verify the uptime. The services are implemented in different languages, because I don't believe that there exists one golden language that can solve all problems.

Is that microservices? I don't know, but it works really great and it doesn't feel like there's any overhead.

Another example. I work in a team where two services use JSON schema validation, and they can't be reconciled. Agreement between the schemas takes so much overhead, because there are different expectations, that 90% of the time in pull requests is spent discussing schema changes.
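One way to cut down that overhead is to make a single shared schema the source of truth for both services, so a change is one pull request rather than two diverging ones. A toy sketch with a hand-rolled type check (real projects would use a schema library; the schema and checker here are invented):

```python
# Both services import this one definition instead of maintaining their own.
SHARED_SCHEMA = {"user_id": int, "email": str}

def validate(payload: dict, schema: dict) -> list:
    """Return a list of validation errors; empty means the payload conforms."""
    errors = []
    for field, expected in schema.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected):
            errors.append(f"{field}: expected {expected.__name__}")
    return errors

print(validate({"user_id": 42, "email": "a@b.c"}, SHARED_SCHEMA))  # []
print(validate({"user_id": "42"}, SHARED_SCHEMA))
```

This doesn't resolve genuinely different expectations between the teams, but it at least makes the disagreement explicit in one place instead of scattered across two schemas.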


While I agree that microservices don't make sense for every use case and definitely not for small companies/platforms, I've done the monolith thing over and over again and for me, microservices truly are a superior platform model.

Yep, they introduce some new problems and by definition, more moving parts but if you're doing anything halfway complicated or "at scale", this is the only sane way to do it.

Docker was hyped a lot but it's a net win for me and for a good chunk of the industry. SPAs with pure API backends are better too. For me and what I do. I love using software defined infrastructure. I love when things are easy and reliable.

I love microservices.

But I also understand why many people don't and why they're a bad fit for some use cases. In some ways, they're kind of like NoSQL databases. If you're just doing microservices because they sound cool and modern, you're like all the folks who use MongoDB and complain that it doesn't work like Postgres. And then talk endless crap about MongoDB.

You're the problem. Not Mongo. Not Postgres. Not monoliths. Not microservices.


Between the monolith and the microservice architecture, there is the SOA (Service Oriented Architecture).

At Link Society[0], we're building a SaaS offering for one of our products (Kubirds[1]). The architecture is as follows:

  - A payment service (with Stripe)
  - An IAM service (with Kratos[2])
  - A Deployment service (which interacts with Kubernetes)
  - A Dashboard backend service (with Hasura[3])
They are not "micro" at all, but each handles its own task nonetheless.

  [0] - https://link-society.com (fr)
  [1] - https://kubirds.com
  [2] - https://www.ory.sh/kratos/
  [3] - https://hasura.io/


Many large applications have pushed back on the model. Microservices tend to solve problems of scaling teams, not scaling users.


Here's my opinion.

There are problems that are shared between monolith and microservice architectures, and there are problems that are different.

For the problems that are different, the ones faced in microservice architectures are considered "harder," and there are fewer battle-tested tools/ecosystems to help deal with those problems (though this second part is rapidly changing).

For the problems that are the same, adopting a microservice architecture will likely force you to solve those problems much earlier. This is dangerous because it takes effort which could be directed to improving your product instead, and you may find yourself solving them in a way which doesn't end up scaling for your organization.

I think they're great, but they require a level of consistent investment in tooling which is not in many organizations' DNA.


The most important benefit of microservices is that they allow development teams to work and deploy independently of each other. For example, instead of having 100 developers working on a single app, you have 12 teams of 8, each working on one microservice. But you have to be careful how you structure your teams, since that is going to drive the overall design of the system. (See Conway's law.)

In real life, many companies break their apps into too many small microservices, and you end up with a "distributed ball of mud," which is way worse than the original monolith.


Software development is about managing complexity. Think about it like this. If the number of lines of code in a piece of software is N, then the amount of effort required to manage that software is N^2 (or N^1.5, or whatever). Some people believe that by properly organizing software development, you can minimize or completely get rid of this superlinear growth in complexity. The idea is that the amount of effort to manage software containing N lines of code should be N.

Microservices try to organize software so that complexity is minimized. In my experience, they completely fail to do so. Rather, they actually increase the complexity by putting related pieces of the system far apart from each other. I think it is better to "embrace the complexity" and concede that software development is difficult. Better developer tools and stronger developers are better methods for dealing with complexity than trying in vain to minimize it.


I look at it as a function of staff size.

When a company gets large enough people start having time to write software that manages other software. And IMO microservices are a late-stage outcome of this.

Layers upon layers upon layers of abstraction being developed and maintained to help provide autonomy (within a certain context) while working with hundreds of other developers.

For the vast majority of use cases a few VMs or even bare metal cloud instances will be a lot more cost effective and stable.


I already noticed that microservices have fallen out of favor with the current cargo-cult-heavy culture. But which buzzword-compliant type of architecture is next?


Like any idea, when taken to an extreme or applied dogmatically, it causes more problems than it fixes.



