Should that be a microservice? Factors to keep in mind (pivotal.io)
213 points by jbkavungal on April 20, 2019 | 80 comments



The article does not address any of the technical complexities of microservices. If you use microservices you are developing a distributed computing system with a lot of technical challenges: additional network delay for each RPC, error propagation, consistency, etc. Addressing such technical challenges will often consume lots of time and require a lot of knowledge that most developers do not have. It's a huge investment with a very high risk.

Furthermore this article fails to list any sensible alternatives. For example, if you have different rates of change you simply need a good separation of concerns with clearly defined boundaries and interfaces. You can do this equally well in a monolithic application, with none of the distributed computing issues.


Agreed, microservices is literally a last resort in my book.

It's the thing you reach for when you're hiring a new team's worth of developers every other week and can't avoid developer blockage any other way.

It's not even a good architecture for most of the things it ends up being used on.

There are very few problem sets that a microservices arch actually excels at.

Almost always a monolith with some SOA and clear domain boundaries is going to work better for static/slow growth teams.

It might be exciting for some but I wouldn't wish it on anyone.


> Almost always a monolith with some SOA and clear domain boundaries is going to work better for static/slow growth teams.

If it has “some SOA and clear domain boundaries” it's not a monolith, even if it isn't decomposed so far as to be microservices.

“Right-sized services” isn't a catchy meme-spreading label, but it's probably the synthesis pointed to by microservices as the antithesis to monoliths.


Monolithic just means it can carry out all its required functionality alone, i.e. its [major] dependencies are internalized.

It doesn't say anything about not being able to be broken into modularized sections, services, or domain boundaries.

Neither does it reject any setups where satellite systems, micro or otherwise, make use of the monolith programmatically or otherwise; hell, it's kind of part of the definition of a monolith that it has multiple access points (i.e. CLI, SDK, REST, RPC, events).


> if you have different rates of change you simply need a good separation of concerns with clearly defined boundaries and interfaces. You can do this equally well in a monolithic application, with none of the distributed computing issues.

Exactly my thoughts - all of the benefits discussed are centred around source code management, specifically, things tied into the source code management which have poor separation of concerns: ungranular tests, ungranular CI...

This doesn't have anything to do with it being loosely coupled code that ought to have an API and then _maybe_, _potentially_ is amenable to being run as a separate service and is worth the cost of that complexity.


> requires a lot of knowledge that most developers do not have

Any resource you would recommend?


I was an early preacher of microservices. In theory, it was great. Independently scalable, rewriteable in any language, and of course a fault in one piece won't bring down the others.

I learned the hard way what it really meant. A fault in one affects the others. What should be a simple bug fix requires changing code and deploying in multiple applications of various states of disrepair. A true nightmare.

For large enough teams and applications under development, I still think microservices are a good choice. But for myself and my small team, monolith applications are so much easier and faster to maintain.


Microservice design (or rather, any good distributed design over multiple factors such as time, location, knowledge, structure, QoS, programming language) is dependent on the following criteria:

- Orthogonality: service X is independent of its requirements of service Y. Your mail client is not a web browser. Your payments platform is not a web-shop.

- Programming against interfaces. Your interface should be relatively stable over iterations, while your implementation can change.

- Comprehensible interface by short-term memory. This means the service/component does not entail more than, say, seven ‘items’. For example, an authentication service should not have more than seven concepts (token, credentials, state, resource owner and so on)

- Related to orthogonality: failure of this service should not entail (immediate) failure of another. This is akin to how the Arpanet was designed.

- No singletons. Services should not be designed in such a way that only one, and exactly one, is running.

Follow these guidelines, and micro-service design becomes manageable and scalable.
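
To make a couple of the criteria above concrete ("programming against interfaces", an interface small enough to hold in short-term memory), here is a minimal TypeScript sketch; the names and shapes are hypothetical, not taken from any real system:

    // A hypothetical auth boundary: a handful of concepts (credentials, token,
    // resource owner), kept stable while implementations change underneath.
    interface Credentials {
      username: string;
      password: string;
    }

    interface Token {
      value: string;
      expiresAt: Date;
      ownerId: string; // the resource owner
    }

    // Consumers program against this interface only.
    interface AuthService {
      login(credentials: Credentials): Promise<Token>;
      validate(token: string): Promise<Token | null>;
      revoke(token: string): Promise<void>;
    }

    // One possible implementation; it can be swapped (e.g. for an HTTP client
    // talking to a remote auth service) without touching any consumer code.
    class InMemoryAuthService implements AuthService {
      private tokens = new Map<string, Token>();

      async login(credentials: Credentials): Promise<Token> {
        // Real credential checking omitted; this is only a sketch.
        const token: Token = {
          value: Math.random().toString(36).slice(2),
          expiresAt: new Date(Date.now() + 3600_000),
          ownerId: credentials.username,
        };
        this.tokens.set(token.value, token);
        return token;
      }

      async validate(token: string): Promise<Token | null> {
        return this.tokens.get(token) ?? null;
      }

      async revoke(token: string): Promise<void> {
        this.tokens.delete(token);
      }
    }

The point is not the implementation but the shape: few concepts, a stable interface, and nothing in it that forces consumers to care how, or how many copies of, the service is running.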


You say you were using microservices, but you say that a bug in one service led to redeploying many services. That sounds really odd, and like you weren't doing microservices at all, but actually just a monolith with many pieces.

I think "Can I deploy a bugfix without redeploying the world" is one of the base criteria for determining if you have implemented microservices or not.


> You say you were using microservices, but you say that a bug in one service led to redeploying many services. That sounds really odd, and like you weren't doing microservices at all, but actually just a monolith with many pieces.

But the problem with microservices is that there is nothing to prevent you from creating these types of potentially incompatible dependencies. If you are separating functionality using functions, the compiler helps you spot problems. If using classes or libraries, the linker can help spot problems.

With microservices, there aren't good tools for preventing these types of problems from arising. Maybe one day Docker, kubernetes, or something else will make it easy to create and enforce real boundaries. However, as long as microservices are just an "idea" without a set of tools that help you enforce that idea, it's very easy to introduce dependencies and bugs in ostensibly "independent" microservices.


> But the problem with microservices is that there is nothing to prevent you from creating these types of potentially incompatible dependencies.

Sure, it's totally a matter of idiom - this is why I stated that a big problem with microservices is people jumping into it thinking it's SOA. Microservices, as a discipline, requires some care. It could be argued that the care required is too high to be worth it though.

> If you are separating functionality using functions, the compiler helps you spot problems. If using classes or libraries, the linker can help spot problems.

Maybe you could be more specific? I don't see how a compiler will help prevent coupling of modules/ internal components. What problems are you referring to?

I agree about microservices being an idea without an obvious implementation - that's a fair criticism.


>> If you are separating functionality using functions, the compiler helps you spot problems.

> Maybe you could be more specific? I don't see how a compiler will help prevent coupling of modules/ internal components.

A super simple but effective way compilers prevent bad behavior is by preventing circular includes/imports. In a programming language, if module A imports module B, then module B can't easily import module A.* The compiler/interpreter will complain about circular/infinite imports. This general rule motivates developers to organize their code hierarchically, from high level abstractions down to lower level, and militates against inter-dependencies.

In contrast, there's nothing to stop microservice A from contacting microservice B, which in turn contacts back to microservice A. It's so easy to do, that many will do it without even realizing it. If you're developing microservice B, you may not even know that microservice A is using it.

Designed correctly, microservices can be another great way to separate functionality. The problem is actually designing them correctly, and keeping that design enforced.

* Sure, there are ways of effectively doing circular imports in programming languages, but they usually require some active choice by the developer. It's hard to do it by accident.
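
To illustrate the hierarchical organization the compiler nudges you toward, a small TypeScript sketch with hypothetical modules (TypeScript is more permissive about import cycles than some compilers, with cycles tending to surface as runtime errors instead, but the remedy is the same: extract the shared piece so both sides depend downward on it rather than on each other):

    // shared/invoice.ts -- the lowest layer; it imports nothing from layers above it.
    export interface Invoice {
      id: string;
      customerId: string;
      totalCents: number;
    }

    // billing.ts -- depends only on the shared layer.
    //   import { Invoice } from "./shared/invoice";
    export function amountDue(invoice: Invoice): number {
      return invoice.totalCents;
    }

    // reporting.ts -- also depends only on the shared layer, never on billing,
    // so the dependency graph stays a hierarchy instead of a cycle.
    //   import { Invoice } from "./shared/invoice";
    export function describe(invoice: Invoice): string {
      return `Invoice ${invoice.id}: ${invoice.totalCents / 100} due`;
    }

Nothing comparable stops microservice B from quietly calling back into microservice A; the cycle only shows up later, in tracing or in an outage.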



I don't think this is a "No true Scotsman" situation. I didn't defend microservices on the premise that "Good" users will do it right, I stated that what is described is not microservices.

Similarly, if I build a microservice architecture, and then say "My monolith is too hard to manage because it's in so many pieces" I think it would be fair to say "You didn't build a monolith".


> I stated that what is described is not microservices.

You're sliding the definition of microservices away from the obvious one (many small services[1]); that's what makes it a 'No True Scotsman.'

With that said, I don't think you necessarily committed a fallacy, it's just a matter of phrasing. "You weren't doing microservices at all" is the fallacy, but the underlying message is sound: It's not enough to split a monolith into services, you also need to spend some time architecting the services to play well with each other.

But I think it's unhealthy to say "you didn't do microservices" instead of "this issue isn't inherent to microservices, you can overcome it," because the former sets up microservices to be a silver bullet.

[1] We can expand this definition to how Fowler and the like define it and still run into 'split monolith' problems (dependency issues being the biggest in my mind).


I disagree about the definition sliding. SOA is closer to "many small services", Microservices is SOA + a set of practices and patterns. You can do SOA and not be doing microservices.

It sounds like what they did was attempt to split a monolith into separate services, which is distinctly different from a microservice approach.


Maybe that microservice needed to change its method signature because it was missing something.


and so you don't have a microservice, but just code that's spread out.

a feature of microservices is a way to add new apis while retaining old apis (for compatibility). So you'd write the new signature, and deploy. Everything still works fine. Then the other services can be slowly updated to use the new signature, one at a time.
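
A minimal sketch of that pattern, assuming a hypothetical Express service; the old route keeps answering while consumers migrate to the new one at their own pace:

    import express from "express";

    const app = express();

    // v1: the original signature, kept alive for existing consumers.
    app.get("/v1/orders/:id", (req, res) => {
      res.json({ id: req.params.id, total: 4200 });
    });

    // v2: the new signature (here, an added currency field), deployed alongside v1.
    // Callers move over one at a time; v1 is retired only once nothing uses it.
    app.get("/v2/orders/:id", (req, res) => {
      res.json({ id: req.params.id, total: { amountCents: 4200, currency: "EUR" } });
    });

    app.listen(3000);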


What if all the other services using the old API are doing the wrong thing because the old API was wrong? The bug isn't fixed until they all use the new API.

As an extreme example, just to highlight, what if you have a login service where you only take the username and hand back an access token. This has the exploit that anyone can get super user access by simply passing the right name. So you patch the login service to take a password as well.

But the exploit is live until all the other services are using the new API... so wouldn't you want to prevent the old API from being used, breaking any service that hasn't been updated yet?


If the interface is so wrong that the implementers actually can't use it safely, that's not a microservices problem any more than it's a monolithic architecture problem.

It's important to design interfaces before they are implemented everywhere. And the D in SOLID stands for Dependency Inversion, I think it applies here. It asks:

When you design a dependency relationship between a big (concrete) thing and a bunch of little (concrete) things, which way should the dependency relationship between them be arranged? Should the big thing depend on the little things, or is it the inverse?

There might seem to be an obvious right answer, but the way I phrased it is deliberately deceptive because according to Dependency Inversion the answer is neither. You should have an abstract interface between them that both depend on, as I understand it.
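
A small TypeScript sketch of that arrangement, with hypothetical names: the "big" orchestrating code and the "little" concrete implementations both depend on the abstraction, not on each other:

    // The abstraction both sides depend on.
    interface Notifier {
      send(recipient: string, message: string): Promise<void>;
    }

    // "Little" concrete things: each depends only on the interface it implements.
    class EmailNotifier implements Notifier {
      async send(recipient: string, message: string): Promise<void> {
        console.log(`email to ${recipient}: ${message}`);
      }
    }

    class SmsNotifier implements Notifier {
      async send(recipient: string, message: string): Promise<void> {
        console.log(`sms to ${recipient}: ${message}`);
      }
    }

    // The "big" concrete thing: it too depends only on the abstraction, so any
    // notifier (including one backed by a remote service) can be plugged in.
    class OrderProcessor {
      constructor(private readonly notifier: Notifier) {}

      async complete(orderId: string, customer: string): Promise<void> {
        await this.notifier.send(customer, `Order ${orderId} is complete`);
      }
    }

    // Wiring happens at the edge of the program.
    const processor = new OrderProcessor(new EmailNotifier());
    void processor.complete("o-1", "alice@example.com");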

(and I'm learning, I got my CS degree almost 10 years ago but just now finding out about these things, so nobody feel bad if you didn't know this... and anyone else who can explain better is also welcome)

This principle is important here because keeping interfaces simple and abstract makes it easier to spot issues like this one, which should never make it into production. An authentication interface that takes only a username is something that would never make it past the first design review meeting, unless it was a specification buried in a majorly overcomplicated concrete implementation (or if they didn't look at the interfaces at all).


> If the interface is so wrong that the implementers actually can't use it safely, that's not a microservices problem any more than it's a monolithic architecture problem.

Right. My point was that there are code bugs and architectural bugs, and from what I can see microservices only really help with the first of those.


Then you redeploy everything. But you're making that call based on risk, not due to a technical requirement.


Yeah, unless it is a bug like I said. Maybe you forgot that you need to save X as well and that all services that use it must pass X, otherwise the data saved is invalid.


This will only be true if the bug fix had local impact.

If instead the discovered bug was more fundamental in nature (i.e. a problem was found in the actual design/implementation of the signature/API itself) then every service using that API will need to change.


So you're making the same change but taking longer to do so?


I think that's fine; if a user action depends on a service being up, and it's down, the whole system appears to be down to the user. There is no way to make an unreliable component reliable without taking it out of the critical path for a request.

Consider a case where you have a bunch of apps that generate PDFs. There are two ways to structure the system; either every application bundles the PDF-rendering logic (i.e., a copy of Chrome and its dependencies) and needs to be provisioned to run it, or you have it as a standalone service that each application calls for its PDF rendering needs.

There are advantages in either approach. First consider the monolithic approach. Say the "foo" service finds some crazy edge case that causes your PDF renderer to use all the RAM on the machine it's running on. The foo service obviously goes down. But the bar service, that doesn't have data that triggers that edge case, remains up, because they are isolated.

If you have the PDF rendering functionality out in a microservice, then foo's bad behavior prevents bar from rendering PDFs, and both applications appear broken for anything that involves generating a PDF. But of course, only the PDF rendering functionality is affected. If foo is in a crash loop because its built-in PDF renderer is broken, it can't do anything. If it's just getting errors from the PDF service, everything else it does still works, so that might be better.

You also have to consider the maintenance costs. The PDF renderer has a 0 day security problem. How do you update it in all 100 of your applications? It's going to be a pain. But if it's all behind an RPC service, you just update that one service, and all the other apps are fixed. The downside, of course, is what if you want some experimental new feature only in one app's PDF renders, but not in the others? That is harder to do when you only have one version of the renderer deployed globally; if each app bundled their own, you could safely upgrade one app to the experimental version without affecting all the others.

So my TL;DR is: it's complicated. There is no one true answer, rather there are multiple solutions all with separate sets of tradeoffs. I generally prefer microservices because my development strategy is "fix a problem and forget it". It is easy to do the "forget it" part when new features can't interact with old features except through a battle-tested API. Your mileage may vary.


Side question: so you found the best way to generate PDFs was Chrome? I’ve recently looked into this and seems like the best approach, renders nicely and can use html etc, but the fact that it has to spawn an external process irks me a bit.


I ended up using Puppeteer, wrapped with a node app that translates gRPC requests containing the various static files, returning the bytes of the rendered PDF. I did not dig fully into figuring out the best way to deal with the Chrome sandbox; I just gave my container the SYS_ADMIN capability which I am sure I will someday regret. Full details are available here: https://github.com/GoogleChrome/puppeteer/blob/master/docs/t...

I see no reason not to open-source it but I haven't bothered to do so. Send me an email (in profile) and I'll see to it happening. (It is easy to write. All the complexity is dealing with gRPC's rather awkward support for node, opentracing, prometheus, and all that good stuff. If you don't use gRPC, opentracing, and prometheus... you can just cut-n-paste the example from their website. My only advice is to wait for DOMContentLoaded to trigger rendering, rather than the default functionality of waiting 500ms for all network actions to complete. Using DOMContentLoaded, it's about 300ms end-to-end to render a PDF. With the default behavior, it's more than 1 second.)
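
For what it's worth, the core of that approach fits in a few lines; this is a stripped-down sketch (not the code described above), using standard Puppeteer calls and triggering the PDF on DOMContentLoaded rather than waiting for the network to go quiet:

    import puppeteer from "puppeteer";

    // Render an HTML string to PDF bytes, waiting only for DOMContentLoaded
    // instead of Puppeteer's slower network-idle heuristics.
    async function renderPdf(html: string): Promise<Uint8Array> {
      const browser = await puppeteer.launch();
      try {
        const page = await browser.newPage();
        await page.setContent(html, { waitUntil: "domcontentloaded" });
        return await page.pdf({ format: "A4" });
      } finally {
        await browser.close();
      }
    }

Sandbox configuration, gRPC plumbing, tracing and metrics are the parts that grow; the rendering itself stays this small.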

Before Puppeteer I tried to make wkhtmltopdf work... but it has a lot of interesting quirks. For example, the version on my Ubuntu workstation understands modern CSS like flexbox, but the version that I was able to get into a Docker container didn't. (That would be the "patched qt" version in Docker, versus the "unpatched qt" version on Ubuntu. Obviously I could just run the Ubuntu version in a Docker container, but at that point I decided it was probably not going to be The Solution That Lasts For A While and we would eventually run into some HTML/CSS feature that wkhtmltopdf didn't support, so I decided to suck it up and just run Chrome.)

The main reason I didn't consider Puppeteer immediately is that Chrome on my workstation always uses like 100 billion terabytes of RAM. In production, we use t3.medium machines with 4G of RAM. I was worried that it was going to be very bloated and not fit on those machines. I was pleasantly surprised to see that it only uses around 200MB of RAM when undergoing a stress test.


I have a c# lambda in aws for taking screen grabs and PDFs of pages. If the service is running and hasn’t idled out it takes ~2 seconds. Takes about 8-15 on first run. Sometimes I’m willing to accept.


There’s CEF, which is effectively Chrome as a library (it's one of the targets for the build process, e.g. Mac, Windows, iPhone, CEF). There are various projects that then build on top of it, like CEF Python.


I know it's condescending to point out that "you are doing it wrong", but in this case, it really seems like your microservices implementation was way off. How come one microservice coming down affects the others?


A single microservice going offline?

Or, an existing API contract is broken, and the fix has update requirements for consumer services?

Maybe that's doing microservices the "wrong way", but I sure see a lot of it.


I appreciate the effort that the authors have put in to synthesise all these points into this article, but I think I disagree with almost every one of the technical points.

1. Multiple Rates of Change: Why must changing one module in a monolith take longer than changing the same module in its own service? Perhaps a better use of time would be to improve CI+CD on the monolith.

2. Independent Life Cycles: Ditto. If the tests partition neatly across modules, why not just run the tests for those modules? If they don't, not running all your tests seems more likely to let bugs through, wasting the investment in writing said tests.

3. Independent Scalability: How bad is it to have a few more copies of the code for the account administration module loaded even if it's not serving a lot of requests? The hot endpoints determine how many servers you need; the cold ones don't matter much. And load is stochastic: what is hot and what is cold will change over time and is not trivial to predict. If you separate the services, you have to over-provision every service to account for its variability individually, with no opportunity to take advantage of inversely correlated loads on different modules in the monolith.

4. Isolated Failure: Why not wrap the flaky external service in a library within the monolith? And if another persistence store is needed to cache, it is no harder to hook that up to a monolith.

5. Simplify Interactions: Ditto. The Façade pattern works just as well in a library.

6. Freedom to choose the right tech: This is true, but as they say having all these extra technologies comes with a lot of extra dev training, dev hiring and ops costs. Maybe it would have been better to use a 'second best' technology that the team already has rather than the best technology that it doesn't, once those costs are accounted for.

The cultural points are largely valid for large organisations, but I feel like it positions microservices as an antidote to waterfall methods and long release cycles, which I think could be more effectively addressed with agile practices and CI+CD. Splitting out a service might certainly be a good way for a large organisation to experiment with those practices, however.


> 6. Freedom to choose the right tech: This is true, but as they say having all these extra technologies comes with a lot of extra dev training, dev hiring and ops costs. Maybe it would have been better to use a 'second best' technology that the team already has rather than the best technology that it doesn't, once those costs are accounted for.

A variation of this pushed me to implement a couple parts of an application as micro-services even though I wanted to stick with a monolith. I wanted to use existing open-source packages written in languages other than the primary one that I had chosen for my monolith. It's really a shame that a cross-language, in-process ABI like COM didn't catch on outside of Windows.


A cross language, in process ABI does exist for all platforms. It's called the C ABI. Almost every programming language can export and import C ABI functions and data.


Sure, but that's lower level than COM or WinRT. And in practice, does anyone embed, say, a Node.js module inside a JVM application via a C library?


I read the whole thing as "here are some factors that may nudge you toward micro services, within the broader context that splitting your app into micro services has a big pile of complexity and other costs".

So it's not that any of those factors necessarily mean micro services are the right answer, but that some combination of them might make micro the right answer for some slice of a system in some context.


This all looks good on paper.

In practice, this assumes that all of the concerns presented there will necessarily become bottlenecks. And that is simply not true.

You can have modules with completely different deployment lifecycles, speed of development and performance characteristics as part of a single service and, for the most part, things will be totally fine.

This really reeks of premature optimization. A badly separated service is a gigantic hassle down the line.

In general, service boundaries become obvious to developers when they start seeing the friction. But the default answer should always be a no unless there is proof that friction is there. The overhead of maintaining multiple services, repos, CIs and so on is not trivial for a team of 10 people or fewer.


> The overhead of maintaining multiple services, repos, CIs and so on is not trivial for a team of 10 people or fewer.

This is why the big guys use mono repos. It makes it trivial to split off services from a code perspective. You still need to manage the ops but if it's code that could have run in another service the ops is mostly a copy/paste of an existing deployment.

As for knowing where the service boundary is, that's something that can evolve just as easily as it does in a monolith codebase. Sometimes your class structure is right and sometimes you need to retool it. It shouldn't be the end of the world to update an internal service.


I don't think the argument that monolith should be the default is super compelling. If you are already answering "yes" to some of these questions you are likely to start benefiting in the short term from microservices - in particular for issues like "multiple teams own parts of the project and release at different cadences".

While the article mentions separate CI/repos/etc, that's not actually necessary or even super relevant to microservices. To my knowledge there is no real link between microservices and isolation of build/ deployment systems.

To me, the big issue with people adopting microservices is not understanding that it doesn't just mean SOA. If you just start splitting your service into multiple processes you gain some benefits, but you're going to incur huge overhead for releases - you probably have tons of shared code/ leaky interfaces. I think this causes most of the pain people have with microservices - and I think that a big part of people making this mistake is building a monolith (when they've answered "yes" to some of the proposed questions in the article) and then trying to split it later, often rushed, because that approach isn't going well.

Personally, my current side project is implemented using microservices. I rewrote one of them this weekend, and that service gets by far the most attention. I also use multiple languages, which is important and allows me to move really quickly (I fall back to Python for library support sometimes so that I can move faster). There's only a single shared library that virtually never updates (it's a fairly low level representation of a graph), and so I therefore virtually never have to worry about redeploying the world.

It's not perfect - I have to figure out how to deal with some of the downsides at some point, but I believe microservice-first was the right call. Oh and it uses a single repository, currently.


> you probably have tons of shared code/ leaky interfaces

Let's say that you have two microservices that work on the same business entity which is important to your application - eg, an invoice in an invoicing application. Not having shared code means that you need to write from scratch all the code that deals with that entity for both services. If you later need to add some new attributes which are relevant to both services, you'll also need to independently update the code in both services, including the testing, and to do double the debugging.

If the services are really "micro", this will happen even more times for the same entity.

How can you do this without doing a lot more coding than if you had shared code?


Well, I think one answer is to add another service in front of that shared business entity that exposes an interface to match the domain of the consumers. Now the shared codebase should be a stable client.
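
As a hypothetical TypeScript sketch of that idea (names and URL layout made up for illustration, assuming a runtime with a global fetch): the invoice entity lives behind one service, and consumers depend on a small, stable client contract shaped around what they need rather than on shared entity code:

    // What consumers see: a narrow, stable contract, not the service's entity model.
    interface InvoiceSummary {
      id: string;
      totalCents: number;
      paid: boolean;
    }

    interface InvoiceClient {
      getSummary(invoiceId: string): Promise<InvoiceSummary>;
    }

    // A thin HTTP client; the invoice service owns the entity and its evolution.
    class HttpInvoiceClient implements InvoiceClient {
      constructor(private readonly baseUrl: string) {}

      async getSummary(invoiceId: string): Promise<InvoiceSummary> {
        const res = await fetch(`${this.baseUrl}/invoices/${invoiceId}/summary`);
        if (!res.ok) throw new Error(`invoice service returned ${res.status}`);
        return (await res.json()) as InvoiceSummary;
      }
    }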

But yeah, of course there's shared code between your codebases - things like JSON libraries, etc, are always going to be shared. The issue is more about what you're sharing - if it's a very stable, small library, it's probably fine. If it's something really core that's going to change a lot, or that two services may diverge in agreement upon, duplication may be worthwhile.


But this is really not any more useful than a monolith. You're applying a complicated criterion for something that should be simplifying my life, and now it's just making things harder in every service.

Again, the default should be a no.


It really is trivial to automate the process of creating a new service with a corresponding repo and CI pipeline. And with tools like giter8 it's not that hard to standardise on service layout and components as well. You could easily do it in a few hours.

Likewise just using modules looks good on paper but in practice most languages make it too easy to reference code in other modules. And so very quickly you end up with spaghetti code. So regardless of whether you use modules or services you still need to do a lot of upfront work in setting and enforcing some design/architectural rules.


Agreed. We built cookiecutter templates at edX for the basic Django service and Ansible playbook. We built shared libraries for OAuth and other shared functionality.

That said, much of this did not come into place until we were on the third microservice. Starting with templates and libraries for the very first service would have definitely been premature optimization, and would have resulted in wasted time and later rework.


It's just as easy to create spaghetti with microservices. Circular dependencies all over the place. Discipline is discipline no matter what you use. Microservices just make lack of discipline harder to fix.


" service boundaries become obvious to developers when they start seeing the friction."

I like this. Split when the pain starts to show, or becomes obvious.


I would agree people are a bit too enthusiastic about microservices these days. The stack I work on is, IMO, broken into too many microservices (admittedly, I had a large part in the decisions that resulted in that). I wouldn't say it's made things terrible but it has slowed down velocity and makes the system as a whole harder to reason about. If I were to re-architect from scratch, I'd probably slash at least a third of the microservices we have. In the future I'd be inclined to keep things sufficiently modular such that various components could be broken out as separate services if needed, but only do so if compelling reasons presented themselves.


So would you consider more of an SOA approach in the future, as compared to micro services?

Also, when your team architected the microservices originally, how many previous microservices systems had y'all built? I haven't built any and am just trying to get a feel for the learning curve.


Speaking as someone on a dev team with little experience with microservices (myself included, initially): you're going to make a big mess before you know how to clean it up. Take eventual consistency seriously. Limit RPC across services as much as possible. Agree on standards and your source of truth as early as possible. CI/CD the small stuff that doesn't matter (linting, test running). Also, have a good integration/e2e testing set up early - mock tests ended up being very brittle for us.


Agree with all of this, especially:

> have a good integration/e2e testing set up early - mock tests ended up being very brittle for us.

Unfortunately microservices also can make good integration testing difficult when it comes to testing full e2e functionality. The results of testing e2e, by nature, rely on the sum of the services interacting with each other properly, thus creating testing dependencies between services that negate some of the advantages of having separated them in the first place.


Is it only me or are they missing the most important question: will this service have an API that provides a reasonable abstraction and is stable enough to provide backwards compatibility to its consumers?

If not, a lot of the independent deployment niceness will become a major nightmare when consumers of the service's API need to be adapted (if that's even possible).


> Independent Scalability

This reason for microservices comes up over and over but I’ve never understood it.

If you scale a monolith, only the hot code paths will use any resources. Having a monolith makes it _easier_ to scale as you can just scale a single app and it will be perfectly utilized. If you have microservices you need to make sure every single one has headroom for increased load.

Scaling up a whole monolith only to increase capacity for one code path doesn’t have any unnecessary overhead as far as I can see. Remembering that if your microservice is on a VM then you are scaling up the OS too which has millions of lines of code - something no one (rightly) is concerned about.

Am I missing something?


I think you're right that this point is often overstated.

It does isolate each service from the scaling _issues_ of the others somewhat - if one service gets overloaded and crashes or hangs then other components are less likely to get brought down with it. In the best case if work is piling onto a queue and the overloaded component can be scaled fast enough even dependent components might not notice too much disruption.

Another advantage is that you can scale the hardware in heterogeneous ways - one part of the app may need fast I/O and the rest of the app might not care about disk speed much at all, so you can have different hardware for those components without overspending on the storage for the rest of the app. I think that's a minor advantage in most cases though.

A sort of in-between house is to have multiple copies of a monolith which are specialised through load balancing and/or configuration values and put onto different hardware, but all running the same code. Probably not that widely useful a technique but one that we've found useful when running similar long running tasks in both interactive and scheduled contexts.


Well, the hot path isn't necessarily the good behaviour one you would choose to scale.

e.g. login service DoS'd, but logged in users can continue using widget service, rather than it grinding to a halt as the login path heats up and consumes all resources.


Back in the day we had a common microservice: it was called pgbouncer. Because database resources are limited, it's nice to have a stable, consistent connection limit across a set number of processes and then let the application monolith scale independently. Also, when you are scaling across multiple machines you don't need all of the code of the monolith on every machine. I've heard that when Amazon switched to a SaaS model, creating AWS, they were locked into a 32-bit monolith in which code was limited to a few GB, and thus scaling resources independently was valuable. Was this useful? I am collecting data on whether I'm polite, helpful, and without snark.


You're assuming an app that scales linearly with hardware. That's very hard to engineer. That's in fact the problem we're trying to solve: The hardware's there but the app can't make use of it due to some contention somewhere.


?

If you scale a monolith you have to scale the whole thing.

You seem to be describing that a monolith is more efficient than a microservice in a non-scaling situation, which is true, but I think you have missed the point on scaling.


DDD isn't synonymous with microservices. Although it does help decouple a lot of your internal components so that they can be split into isolated services later on, there's nothing stopping you from hosting them in a single monolith. In fact there's very little downside of doing it that way.

The number of merge conflicts is reduced because the code has good internal separation. Scaling can be done on the entirety of the monolith, which is simpler than trying to do this at a microservice level.

I'm working on a DDD typescript framework at https://www.npmjs.com/package/@node-ts/ddd that helps better structure node apps for domain driven design


There is a 7th main factor: you should have multiple teams. It's sad how most pro-microservice articles forget to mention this point explicitly. For most IT companies microservices don't make sense at all.


Exactly this. Conway's Law should be taken not as an observation, but as advice. "Sure, you can have a service structure that doesn't match your org structure, but it's gonna cost ya."


If you have multiple teams and a well structured monolithic application where each module is its own service and only exposes functionality via an interface, how does being a microservice help? If each team also writes unit tests, when the other team makes changes, they can easily see if their change affects the other team.

If you are using a statically typed language, you can make a change in the signature of your function and automatically see where the change affects other parts of the code and do an automatic, guaranteed-safe refactor.


This is my #1 reason for using them. Microservices can be great at delimiting responsibility.


It’s worth noting that you can accomplish a lot of the same goals via a single repo but separate out execution and deployment with distinct Dockerfiles.


It’s still over complicating things. If you just want a separation of concerns, reduced merge conflicts, and the optionality of separating a module out into a separate microservice later, you can accomplish that a lot easier in a monorepo with (speaking in C# terms, choose the equivalent for your language of choice) one solution, separate projects for separate “services” (domain contexts), exposing the assembly/module via public interfaces and using DI.
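
A rough TypeScript equivalent of the same idea, with hypothetical module names: each domain context exports only an interface plus a factory, the composition root does the wiring, and any one context could later be lifted out into its own service without touching its consumers:

    // billing/index.ts -- the only file other contexts are allowed to import from.
    export interface BillingService {
      charge(customerId: string, amountCents: number): Promise<void>;
    }

    export function createBillingService(): BillingService {
      // Repositories, payment gateways, etc. stay private to this module.
      return {
        async charge(customerId, amountCents) {
          console.log(`charging ${customerId}: ${amountCents} cents`);
        },
      };
    }

    // app.ts -- the composition root, the only place that knows the concrete wiring.
    //   import { createBillingService } from "./billing";
    const billing = createBillingService();
    void billing.charge("cust-42", 999);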


The problem is that you will not be able to run a monolith without a microservice. In fact, you'd be able to run an entire project without a lot of trouble. This is so much easier to debug and debug. Even with simple tools, it's almost impossible to debug code without refactoring a few dozen layers and building up another microservice. The question is what does the microservice plan on doing? When it needs to scale, it's an issue of how much time is spent on the development of the project. If the micro service plan is a waste of time, it will be a problem. This is especially true if you only have 2 servers running the microservice, and you can use docker containers to build and maintain your own servers which support everything (apache, docker); for your projects, it probably takes less than 5 minutes to figure it out, and when you start getting serious about running them with one or more different pieces of infrastructure you don't think twice about spending that much time in production. I'm not a huge fan of microservices: I think they will be hard to scale even before they become more of a problem.


A microservice seems to be just another level of abstraction. They follow in line from functions, to classes, modules, packages, and libraries. It's just the next higher level of abstraction, this time separated by network calls.

The positive side of microservices is that they can be scaled and updated independently. The inherent downside is that adding network calls can add race conditions and failure states.

However, the real trouble with microservices starts when they are misused. If service A calls to service B which calls to service C, which in turn modifies the state of service A, you can easily enter a world of pain. This type of behavior can happen with any abstraction (e.g., functions, classes, modules), but unfortunately, it is very easy to do with microservices, and happens all the time.

I do like the idea of separating truly independent services for scaling, but today we still need a better way to enforce or clarify that separation so that these types of dangerous dependencies don't creep up.

Just spit-balling here, but this type of dependency enforcement could happen at the container or container orchestration level. Perhaps Docker or kubernetes could prevent a service from accessing a resource if another upstream service accessed that same service.

If you need two services that talk to each other to also read and write to another service, you aren't really building microservices, you're building a distributed system.


I've never quite understood the logic behind #3. Why wouldn't I just scale my monolith up and down instead? I could see some instances where being able to do it independently would be useful (e.g. when the feature has unique resource requirements), but I would guess that, for us, 9 out of 10 times, it has been fine to just scale the monolith as a whole up and down as necessary.


Cost is the easy answer - as with any optimization, I guess.

It can also be really annoying sometimes to scale a monolith - a big one can have a lot of different parts like multiple threadpools/ database pools. One component hits a threshold, you add more instances, now your other component is overprovisioned, maybe even failing due to too many DB connections, or you hit weird thread contention issues.

I've had some pain in the past scaling monoliths in similar situations.


I just installed a Python package for a chatbot. Here are its dependencies:

"Installing collected packages: asn1crypto, six, pycparser, cffi, cryptography, PyMySQL, future, certifi, python-telegram-bot, beautifulsoup4, idna, urllib3, chardet, requests, wikipedia, lxml, requests-toolbelt, pymessenger, autocorrect, line-bot-sdk, click, Werkzeug, itsdangerous, MarkupSafe, Jinja2, Flask, python-engineio, python-socketio, Flask-SocketIO, pymongo, PyYAML, pytz, tzlocal, setuptools, APScheduler, SQLAlchemy, kik, nltk, textblob, emoji, PyJWT, pysocks, twilio, websocket-client, slackclient, oauthlib, requests-oauthlib, tweepy, wget, MetOffer, redis, viberbot, python-dateutil, sleekxmpp, programy".

Monolithic package. Has nailed-in support for all possible ways to communicate with it.

I'd be happier if the part that does the work was decoupled from all those interfaces. Especially after struggling with a build problem where it was building its own crypto library.


I'm still not convinced microservices and SOA are different paradigms. We were doing SOA with REST ages ago. Everything always boils down to common sense. It seems to me both paradigms lead to similar results.

Microservices are just a buzzword for the same thing we did before using current day technologies (node instances, cloud, Docker, Kubernetes, serverless etc).


I'll add another: Is it independently reusable?

Even if you aren't going to use it for something else, separating some of your code and releasing it as open source is a great way to get more people to test it -- and find bugs before they affect you in production.


This depends on what you are releasing, and how much support you’re willing to offer to the community. For example, the microservices and libraries I worked on at edX were used by the Open edX community, but rarely at the scale at which edX uses them. The community offered new features, but edX did much of the bug fixing because we had more extensive knowledge and use of the software.


Is this a microservices issue or a scale issue in general? Seems like any open source tool that had users that operated at different scales would run into this issue, no matter the architecture, but I don't know what I am missing.


Project scale and scope. edX does most of its development in public repos. Not all of the features and functionality are used by every open source user. The scale of edX’s user base is so much larger than others’ that edX finds issues well before other users. Also, by the time a formal Open edX release is announced, edX has run the code for weeks or months.


For me the question remains... should a particular set of functionality be written as a "separate program". After all, isn't a microservice just an independent program that communicates with others using an industry standard protocol?


Should you even use microservices? I'd argue probably not. In my experience most people start looking at this because they hate their existing code (often for good reason) and just want to spend less time working with that code. It's just easier to do than refactor a monolith with serious problems...but then you have all the issues of a distributed system, and it's never been worth it in my experience.

If you find yourself in this situation, I'd recommend identifying the problems you have with your current monolith and hopefully refactor it, otherwise it's a lot more sane to gradually migrate to a new monolith by slowly moving responsibilities to it rather than attempting to go down the microservices route.


I am convinced that I am doomed to spend the next 10 years of my professional career converting microservices back into monoliths. 5 of those years will have to be incognito because microservices are not quite considered over just yet. It has already begun.


It's interesting that this [1] came up just a couple of days ago. Just like with Linux, 90% of the problems that microservices are trying to solve can be solved with modules, without the non-trivial problems that microservices introduce.

[1] https://news.ycombinator.com/item?id=19689341


I know Betteridge's law of headlines wasn't designed for titles like this one, but it's pretty on point.



