
Convincing people that microservices are not a cure-all but just another design pattern.

You have to start out with a monolith, and only if you realise along the way that some components might work better as a service (micro or not) should you extract those. Until then, commonplace modularisation will serve you just fine.

Once you have more than one microservice running, infrastructure becomes a huge problem. There's no real turnkey solution for deploying and running internal / on-premises microservices yet; you basically have to build monitoring, orchestration and logging infrastructure yourself.




One rule of thumb is "one team, one service". If you have multiple teams working on a single service, then it might start making sense to migrate to multiple microservices.


That's a nice rule of thumb but it's radically different from the team I work on. We run half of one logical service (visible to users outside the team), but that logical service is implemented with a dozen or more microservices internally. We have user-facing services which need high priority, different flavors of batch jobs which need to be orchestrated and prioritized, and various other pieces of infrastructure.

The number of different services is close to the number of people on the team.

This is working very well for us, and it provides us with some welcome isolation when there are problems with one of the microservices. Maybe we can go into read-only mode or stop processing batch jobs for a while, depending on what services have problems.

But we also have good infrastructure support, which makes this a lot easier.


Yeah, I could totally see how, if you have very strict uptime requirements and you want to allow different pieces of the infrastructure to be able to go down at different times, it's an exception to the rule.

It's just that for every team I see with a good use case for microservices, one that does the hard work of instrumentation and deployment, I see eight teams that go with microservices because they think it's a magic bullet. Then they don't spend the time and effort necessary to get instrumentation and orchestration up and running. They don't aggregate logs, they don't spend the time to create defined contracts between the services, and they don't make services robust to the failure of other services. They just complicate their debugging, deployment, uptime, and performance scenario without getting any of the benefits.
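Making a service robust to the failure of its dependencies is mostly about catching the failure at the boundary and degrading gracefully rather than propagating it. A minimal sketch, where `fetch_profile` and the returned fields are hypothetical stand-ins for a real downstream call:

```python
# Sketch: one service degrading gracefully when another fails.
# `fetch_profile` stands in for a hypothetical downstream service call.

def get_profile(user_id, fetch_profile, fallback=None):
    """Call a downstream service, serving a degraded default on failure."""
    try:
        return fetch_profile(user_id)
    except Exception:
        # Downstream is unhealthy: return a stub instead of
        # passing the failure on to our own callers.
        return fallback or {"user_id": user_id, "name": "unknown"}

# A flaky downstream, for demonstration:
def broken_fetch(user_id):
    raise ConnectionError("service unavailable")

profile = get_profile(42, broken_fetch)
print(profile["name"])  # → unknown
```

Real systems layer timeouts, retries, and circuit breakers on top of this, but the catch-and-degrade boundary is the core idea.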


In this case, we don't have strict uptime requirements. But there are enough times where our integration tests don't catch some kind of error, and it's nice that the service doesn't have to go completely down for that.

It's also a lot easier to prioritize process scheduling than it is to prioritize thread scheduling.
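That asymmetry is visible at the OS level: per-process priority is a one-liner, while most runtimes (CPython included) expose no per-thread priority at all. A POSIX-only sketch, with the batch command being a trivial placeholder:

```python
import os
import subprocess
import sys

# Sketch: start a batch worker at lower priority than the
# user-facing process. preexec_fn is POSIX-only; os.nice(10)
# raises the child's niceness (lowers its priority), which an
# unprivileged user is always allowed to do.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('batch job running')"],
    preexec_fn=lambda: os.nice(10),
)
proc.wait()
```

Doing the equivalent for threads inside one process would mean cooperative scheduling in application code, which is exactly the hassle separate processes avoid.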


I would make an entirely different case.

I had a system where the main running cost was MySQL. It turned out that I needed to provision a lot of MySQL capacity because there was one table that had a high rate of selects and updates.

The hot table did not use many MySQL features and could easily be handled by a key-value store with highly tuned data structures. That's a place where a "microservice" with its own address space, if not its own machine, makes it possible to scale the parts of the system that need to be scaled without scaling the rest.


I see no problem in a "monolith" using different kinds of databases.


Sure, but peeling off something that has radically different scaling properties is easier if you put a "web service" in the way.


Still, based on what you're describing, it was MySQL that was the bottleneck in your case. In that case you scale the database, either by sharding or by creating separate databases for different use cases. That's irrelevant to whether you use microservices or not.


I don't understand why putting a web service in front of mysql and a key value store makes scaling easier. Would you mind explaining?


Simple. Most of the database is "large" in terms of data (say 50M rows) but that database gets maybe 10,000 updates a day. The read load is well-controlled with caching.

One table is small in terms of data but involves an interactive service that might generate 50 updates/sec at peak times.
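The arithmetic behind calling that table "hot" is worth spelling out, using the figures above:

```python
# Rough arithmetic on the write rates quoted above.
bulk_updates_per_sec = 10_000 / 86_400   # big tables: ~0.12 updates/sec
hot_updates_per_sec = 50                 # hot table, at peak

ratio = hot_updates_per_sec / bulk_updates_per_sec
print(round(ratio))  # → 432
```

So the small table sees a write rate roughly 400x that of the rest of the database, which is why provisioning MySQL for it dominates the cost.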

With the "hot" service implemented on top of a key-value store, the database becomes the ultimate commodity: I have many choices, such as in-memory with logging, off-heap storage, distributed key-value stores, etc.
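What makes the backend swappable is that the hot table only needs a narrow key-value interface. A minimal sketch (the class and key names are illustrative, not from the original system):

```python
# Sketch: the narrow interface the hot table actually needs.
# An in-memory dict stands in here; Redis, an off-heap store, or
# a distributed KV store could sit behind the same boundary.
class HotCounterStore:
    def __init__(self):
        self._data = {}  # stand-in for the tuned KV backend

    def get(self, key, default=0):
        return self._data.get(key, default)

    def incr(self, key, amount=1):
        self._data[key] = self.get(key) + amount
        return self._data[key]

store = HotCounterStore()
store.incr("page:home")
store.incr("page:home")
print(store.get("page:home"))  # → 2
```

Because the service exposes only get/incr, swapping the backing store never touches the callers, which is the whole point of putting the boundary there.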

The service is not "in front of" MySQL; as far as the app is concerned, it sits alongside it.


Agree - the biggest gain microservices give you is that they allow many teams to make progress together. If you don't have many teams, the overhead may not be worth it.


That's a pretty good rule of thumb - nice


I agree with you that these things should probably start with monoliths and then migrate to microservices.

I'm lucky that the project I'm working on supports using JHipster (https://jhipster.github.io/) with microservices deployed to OpenShift (https://www.openshift.com/). I used MiniShift to test my deployments, metrics, monitoring, orchestration, and logging locally. This was mostly for a proof-of-concept.


We should upvote this to the top; it's a pretty good summary of the whole situation.



