Microservices (basho.com)
314 points by otoolep on Sept 15, 2016 | 146 comments



You need to be this tall to use [micro] services:

* Basic Monitoring, instrumentation, health checks

* Distributed logging, tracing

* Ready to isolate not just code, but whole build+test+package+promote for every service

* Can define upstream/downstream/compile-time/runtime dependencies clearly for each service

* Know how to build, expose and maintain good APIs and contracts

* Ready to honor backward and forward compatibility, even if you're the same person consuming this service on the other side

* Good unit testing skills and readiness to do more (as you add more microservices it gets harder to bring everything up, so you become more unit/contract/API test driven and less e2e driven)

* Aware of [micro] service vs modules vs libraries, distributed monolith, coordinated releases, database-driven integration, etc

* Know infrastructure automation (you'll need more of it)

* Have working CI/CD infrastructure

* Have or ready to invest in development tooling, shared libraries, internal artifact registries, etc

* Have engineering methodologies and process tools to break features down and develop/track/release them across multiple services (XP, Pivotal, Scrum, etc)

* A lot more that doesn't come to mind immediately

Thing is - these are all generally good engineering practices.

But with monoliths, you can get away without having to do them. There is the "login to the server, clone, run some commands, start a stupid nohup daemon and run ps/top/tail to monitor" way. But with microservices, your average engineering standards have to be really high. It's not enough to have good developers. You need great engineers.
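
To make just the first bullet concrete, here is a minimal sketch in Go of the kind of health-check endpoint every single service on that list would need (checkDatabase and the port are hypothetical stand-ins); multiply this, plus metrics and tracing, by the number of services you run:

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    // checkDatabase is a hypothetical dependency probe; a real service would
    // ping its datastore, message broker, downstream services, etc.
    func checkDatabase() error { return nil }

    func main() {
        http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            status := map[string]string{"status": "ok"}
            if err := checkDatabase(); err != nil {
                w.WriteHeader(http.StatusServiceUnavailable)
                status["status"] = "degraded: " + err.Error()
            }
            json.NewEncoder(w).Encode(status)
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }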


That is a good list. At an even simpler level, perhaps we can summarize:

Microservices necessitate the application of a more rigorous set of engineering practices to all service infrastructure components and therefore carry a greater overhead than traditional development methodologies - rigorous engineering does not come free. Whether that trade-off makes sense for any given project is a question of resources and requirements.

I feel two salient points were not mentioned: (1) Popular microservice orchestration/infrastructure management approaches are not universally applicable; their limitations should be recognized before assuming applicability. (2) The webhost is currently down; perhaps the author should have used a scalable or distributed cluster of microservices ;)


That's pretty spot on. I once made an abbreviated flowchart of the above for my microservices article:

https://www.stavros.io/posts/microservices-cargo-cult/

I urge everyone to use it to decide whether they really need microservices or not.


I know it is a bit tongue-in-cheek, but if you're working somewhere where data segregation can be solved by "a simple cascading delete", you aren't operating anywhere close to where any service architecture is relevant.


ha this is a great flowchart!


I dunno. Given the fuzzy definition of "microservice" I tend to think of "OTP Applications" as logical microservices... ones you can even build a monolith (or distributed monolith) out of if you want.


You can add to your list all the sysadmin and devops complexity to the power of two, new single-point-of-failure issues, SLAs, backups, retention of backups... Basically, you are multiplying the complexity for the ops people.


Awesome inventory. I'd add service discovery and a distributed key/value store for configuration data.


Well said, thanks!


This is a great list.

When microservices work, it's because they made it easy to verify each of these [obvious] bullet points. For some jobs, file this under premature contemplation.

When other methods work... My top two are "clarity of focus" and "[relative] lots of unnecessary labour". "Lifecycle" takes a coalition third.


This looks like almost a direct copy of one of Matt Stine's popular talks.


I'm sorry, I'm not aware of the person, but I would be interested in reading/seeing it. A quick Google search yielded https://blog.pivotal.io/pivotal-perspectives/features/revisi... - is this the one you were referring to?

Btw - the list is not an invention worth copying. None of the items in it are novel or unique; they are general practices that every good engineer would have their own version of. I'd like to see the talk and try to add a few more items to this.


In my experience, once you go microservices, it's very hard to run the whole system in a local environment. It's nearly impossible to get an overview of the system anymore.


Sounds like a great product.


They're nearly always a major violation of KISS, in other words.


I didn't realize Pivotal and XP were different things - anonymous Pivotal Labs pivot, clutching his stained copy of XP Explained


Microservices, like NoSQL databases and complex deployment systems (Docker), are very important solutions to problems that a very small percentage of the development community has.

It just so happens that that portion of the community is the one most looked up to by the rest of the community, so a sort of cargo cult mentality forms around them.

A differentiator in your productivity as a non-huge-company could well be in not using these tools. There are exceptions, of course, where the problem does call for huge-company solutions, but they're rarer than most people expect.


My company was built from the ground up with a microservice architecture and it is an unmitigated disaster. Totally unnecessary, mind-numbing problems unrelated to end-user features, unpredictability at every step, huge coordination tax, overly complex deployments, borderline impossible to re-create, >50% of energy devoted to "infrastructure", dozens of repos, etc.

The whole thing could be trivially built as a monolith on Rails/Django/Express. But that's not exciting.


Interestingly, the Sam Newman book about Microservices specifically says it's easier to succeed by starting with a monolith that you then break up. That advice does seem to be ignored by people who want a microservice architecture because it looks good on their CV, or because they think it will magically solve a bunch of hard problems they don't know how to solve.


I believe it.

The best place to put a module boundary - and the best format for communication across that boundary - is rarely apparent from the outset. With a monolith it's relatively easy (if not actually easy) to fiddle with those details until you get them right. With a service it can be very difficult to iterate on this stuff, so unless you're very confident you'll get it right on the first try, it's best to get a bit of experience in the problem domain first.


I think microservices can work well, but also agree that it's easier to start with a monolith. It's hard to go out and create perfect microservices. In my experience they emerge from the problem much like how an amoeba gets big and breaks off at natural points.


We were also built from the ground up with microservices and had the exact opposite experience. Faster shipping (more value to end users), more predictability (APIs designed/behaved similarly across functions despite polyglot tech), much less coordination overhead (deployed dozens of times per day with a <10 dev team, pre-released backends well in advance of the user-facing parts), etc. We had to invest a lot in infrastructure, but that was worth it for many other reasons as well. Dozens of repos is annoying, but not for a technical reason (a lot of SaaS like Bugsnag and GitHub used to charge by project).

The biggest downside is it makes shipping an on-prem version nearly impossible. The infrastructure and the software are so inextricably linked that it is not portable in the least bit.


We ship a microservices-based product on-premises. You can do it, but it's a heck of a lot of work. Plus you don't get any of the benefits microservices might bring in a CD scenario: you can't do CD over an airgap.


One of the ways we managed to get a microservices system on prem (vSphere or OpenStack) is to configure it with a system that can handle the whole provision/build/configure/deploy as a single unit: http://bosh.io ... though the learning curve was steep.


BOSH has a famously steep learning curve: http://i.imgur.com/4UpbgJm.png


What was the system? Was this at or involving Pivotal? If not, this would be the first non-Pivotal use of BOSH I've heard of, which would be very exciting!


I think this is where Kubernetes has potential.

You can bundle up those complex dependencies into deployment manifests, or use Helm.

It's like SaaS in a box.


Tell me about it. Exactly the same situation here. You forgot the fact that they're paying $1k+ for AWS for all these services, where if it was a normal application it would've cost around 100 bucks.


If you're doing this then you don't know what you're doing infrastructure wise anyway.


The app I'm working on is an ASP.NET MVC monolith, and it's fantastic.

Yeah it's not new fun tooling, but boy does it feel good to ship features without it being a total pain in the arse.

Also C# is pretty nice.


Well if you migrate to .NET Core (MVC) aka MVC6 it will be new fun tooling.


I suppose there's a difference between the "from the ground up micro-architecture" and "using a micro service or two" that should really differentiate the various "should we use microservices?" discussions.

Several sites I'm aware of have a Facebook-style chat service, which is basically an off-the-shelf node app on its own. This makes far more sense than trying to build such a thing into their legacy app. It also perfectly describes a very useful microservice, in a very different environment to yours.


> dozens of repos

ha. I wish I had this many repos. We have 1000+ git repos. To be fair, a ton are open sourced and there are reasons why it's done this way, but still.


Sounds like your company built a distributed monolith when they thought they were building microservices.


Similar situation here. Architect's desire: exposing usage and load metrics to autoscale the services.

Reality: Everything runs on two EC2 instances, regardless of load.


I see microservices as responsibility deduplication, applied across the whole organization.

De-duplication is a very old concept (it has differentiated good "sysadmins" from bad ones since the '90s, and good programmers from bad ones).

Thinking across the whole organization is what is hard for some people.

Currently working as a consultant in a big corp... you get to this problem:

Resource: name FooBar, type int (organization view point)

    app1 name: FooBar  type: int
    app2 name: foo_bar type: int32
    app3 name: Foobar  type: Meta::Foo::Bar
    app4 name: foobar  type: string
    app5 name: fooBar  type: int64
... etc until app 35 and script 192679

Microservices, thinking transversely, solve that. See AWS, Google Cloud, Azure, etc. "resource names" (ARNs, etc.) for an example of a simple and great microservice.

Note also, that microservices experts (and I'm not one of those) recommend a monolithic and transactional core architecture, for microservices infrastructure.


> Note also, that microservices experts (and I'm not one of those) recommend a monolithic and transactional core architecture, for microservices infrastructure.

This is a pithy encapsulation of something I've been thinking a lot about recently: bud off a microservice from your transactional core if it is higher leverage to do so. Any good readings you've found on this perspective?

An example that comes to mind is: I might write my core application in Ruby on Rails, but need to perform a specialized, CPU-intensive function (PDF generation). I can delegate that to a microservice, invert a CPU-bound problem into an I/O bound one (from the perspective of the Rails core), and get the job done with less hardware.
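
A rough sketch of what that delegated service could look like, written in Go purely for illustration (the /render endpoint and renderPDF are hypothetical); the Rails core would POST the document source and simply wait on the response:

    package main

    import (
        "io"
        "log"
        "net/http"
    )

    // renderPDF is a stand-in for a real renderer (a library, wkhtmltopdf, etc.).
    func renderPDF(src []byte) []byte { return src }

    func main() {
        // The core app POSTs the document source and blocks on I/O while this
        // process burns the CPU.
        http.HandleFunc("/render", func(w http.ResponseWriter, r *http.Request) {
            src, err := io.ReadAll(r.Body)
            if err != nil {
                http.Error(w, err.Error(), http.StatusBadRequest)
                return
            }
            w.Header().Set("Content-Type", "application/pdf")
            w.Write(renderPDF(src))
        })
        log.Fatal(http.ListenAndServe(":9090", nil))
    }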


> I see microservices as responsibility deduplication, applied across the whole organization.

AKA SOA. Why call it microservices?


The Newman book is worth a read - I'm going through it at the moment and a lot is applicable to SOA as well. Microservices is partially a re-branding exercise to take the massive space SOA covers and talk about a more specific definition.

The main thing is the size of the services, the clue being in the name. Also, there's a clearer emphasis on the services being more business-related concepts; SOA is often described in terms of more techy service splits rather than business concepts.


In many ways it's just like micro kernels again.

If microservices are the bee's knees, why are you writing them in Eclipse or Emacs? Wouldn't interconnected processes make up a "better" environment? And why are you deploying on something as monolithic as Linux? Shouldn't you out-compete all of these obviously inferior solutions, as there is probably more money in that than whatever web app you are currently building?


That's one thing I've always thought when people claimed microservices followed "the UNIX philosophy". UNIX doesn't even follow that philosophy, it's a monolith! Yet it's also a modular system, which is possible without being either a microkernel or microservices.


Yes. For me, shared state in the Linux kernel is like a shared database between microservices. It makes life way easier, is faster, but you need to be careful you don't introduce subtle bugs.


The kernel is only one piece of a *NIX system. The userland tools are (mostly) a good example of small focused applications that can be combined in interesting ways.


Yeah, that is definitely true. I was looking at it as effectively being a monolith that existed to facilitate that, but that does only apply to the kernel.


The problem with microservices is the notion of "micro". Services shouldn't be micro or macro but the right size. "Monolithic app" isn't a bad word; unfortunately some influential people tried hard to make it sound like it was.


Indeed. It's easier to change a monolithic stack while you're still discovering the actual architectural requirements of the particular problem you're solving.

As you get a better understanding of how you need to handle things, where the performance bottlenecks are, etc., you can start breaking out pieces that would benefit from being isolated.

It's extremely unlikely that, in the short term (the first year or so) of the application being used, the engineering would benefit from a microservices architecture.


That sounds like the obvious approach. Premature optimization and all that.


> It just so happens that that portion of the community is the one most looked up to by the rest of the community

It certainly is the most vocal.


I'm a little long in the tooth, so I'm not as up to date with every newfangled technique to land in IT. Some of you may find this anecdote interesting and somewhat pertinent. Many years ago the electric utility I worked at had a home-grown set of batch-run Pro*C and PL/SQL programs that ran various metrology operations on large volumes of meter data. These things were interdependent, ran single-threaded and created a real "peak-CPU-demand" problem for our compute hardware (the irony was not lost). Our industry was facing an explosion in data due to the switch to smart metering. What to do?

Our apps all depended on an Oracle DB. Oracle had recently introduced Advanced Queuing. So I figured I'd de-batch and decouple these things using AQ. Every program (C++) was broken into "atomic", stateless business tasks. Every task was fed by a "task queue". Tasks would take a work-item off a queue, do their thing and depending on the outcome, would look up a destination queue (destinations could only be "business state" queues; task queues could only be subscribed to state queues (topics)), dropping the task outcome onto the state queue. Being stateless and callback driven by AQ, we could run these things together and ramp them up and down as demand required.

The overall structure and dependency of the various tasks was externalised through the data-driven queue network. The resulting solution was far more maintainable, provided "free" user-exits (by virtue of being able to plumb new tasks to existing "business state" queues), and was eminently horizontally scalable. In hindsight this was definitely not state of the art. But we were a pretty conservative business with a bunch of pretty unworldly C and PL/SQL programmers. None of us had used Java at that point. But with this approach were able to cope with a massive increase in data volume and make use of all our expensive Sun cores most of the time.
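
The task/state-queue structure described above generalizes well beyond Oracle AQ. A rough sketch of the pattern in Go, where the Queue interface is a hypothetical stand-in for AQ or any other broker (not its actual API):

    package taskqueue

    // Message is a work item or an outcome travelling between queues.
    type Message struct {
        Payload []byte
    }

    // Queue is a hypothetical broker abstraction: AQ, RabbitMQ, whatever.
    type Queue interface {
        Take() (Message, error) // block until a work item arrives
        Publish(m Message) error
    }

    // RunTask is a stateless task: it consumes from its task queue, does its one
    // job, then routes the result to the "business state" queue named by the
    // outcome. Scaling up is just running more copies of this loop.
    func RunTask(taskQueue Queue, stateQueues map[string]Queue, work func(Message) (Message, string)) {
        for {
            msg, err := taskQueue.Take()
            if err != nil {
                continue // a real system would log, back off, and alert here
            }
            out, outcome := work(msg)
            if dest, ok := stateQueues[outcome]; ok {
                dest.Publish(out)
            }
        }
    }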

No Java, no REST, no HTML, no SOAP. But we called these queue micro services :-)


Yes, thank you.

I've done little heavy lifting with Oracle, but the general pattern you describe has been my go-to methodology for north of 20 years now. I've come to call it 'message oriented programming'. But it's just one of many ways to embrace the benefits of loose coupling.


What would happen to the system when Oracle rolled out a new version or patch?


Haha. We wished we got patches. This was back in the day when the prevailing mantra was, "Patches? We don't need no stinking patches!" (Though we did convince the DBAs to apply a necessary AQ related patch. Rare.) Having said that, Oracle were pretty good with backwards compatibility regarding their DBs. There was talk of this thing called RAC in the next version. What a dream! That's what I would have gone with to achieve "zero" downtime upgrades. Never got the chance. We used very small patch windows where all boundary processes would stop.


From personal experience, microservices enforce a clear interface and isolation pattern. This is achievable in many ways, but having discretely deployed code makes the boundaries very hard to violate, rather than relying on discipline alone.

Licensing costs can go up drastically, as most modern licensing is node/core based, and deployment procedures can get more complicated.

I would love to understand how this article believes that the modules in a monolithic system can be scaled horizontally if they are actually a single code base in a single system. Either the system isn't monolithic, or they have never really done it. Sticking a load balancer in front of a microservice and scaling based on measured load requires tools and technologies, but is very scalable. It also allows you to do rolling deployments (drain, rotate out, update, rotate in) that get you near-zero planned downtime.

Distributed transactions are the devil, but you don't need to do them in a microservice design. It requires design work on the front end to clarify what the system of record is, but if each service has a domain it controls, and all other services treat it as the truth, it's rather simple. I say this having researched doing payment transactions across geographically diverse colos, and we treated that as a sharding/replication/routing issue very successfully.

Ninja edit: Starting with a microservice design is most likely overkill for a lot of systems, but either way, clear interface/boundaries in your system are good and healthy


Why would you not be able to scale a monolith? You can apply the same principles to it: "[Stick] a load balancer in front of a micro service and scaling based on measured load"?

A microservice architecture allows you to scale up very particular components of an architecture, but there is nothing stopping a monolith from being horizontally scaled in just the same way. In AWS, I would deploy the monolith as an AMI in an auto-scaling group with a load balancer in front.


Yeah, scaling app code is generally a solved problem: shared nothing with some load balancers. It doesn't work for every problem, but the above has been standard in my circles for at least a decade by now.

Databases are trickier though.


Multiple instances with shared nothing is the opposite of a monolith, by definition.


Not if they're all running the same code.


OK, I looked up the definition of monolithic application in a couple of places, and the meaning is not quite what I thought it was. I thought it was 'one app does everything for everybody' but it's more like 'one app does everything for somebody', so the horizontal scaling applies, and my previous comment is wrong.

It's a shame that a 'monolith' application doesn't just mean a genuine singleton, though, as that would be the perfect name for it. A bank of load-balanced monoliths should be a polylith.


I'm not sure if it's a useful distinction, because when you think you're building a 'monolithic' app you might actually be building a 'polylithic' app. Beyond that, in a lot of instances you have to make very few tradeoffs to go from 'monolith' to 'polylith'.

By your definition, most rails/django apps are probably polyliths.


Author here - Came to reply with basically what you just said.

Thanks for giving it a read!


Jesus christ. "Stick a load balancer in front of it and scale horizontally" works _precisely_ as well for a monolithic app as it does for a microservice. Which is to say, it might work, to some degree, to some total load, for some systems, and it might not for others, and will eventually break down at high enough load for pretty much anything that's not a pure function (e.g. you could scale a RESTful javascript linter horizontally presumably forever with more load balancers and more api servers, but your chat system is going to get more complex)


At work I built and maintain a large codebase that deploys to multiple servers to perform multiple tasks. We also license software.

The answer to your question is simply libraries and build targets. My monolith is mostly shared code, with unique functionality at the fringes, but it all builds into a single deployable jar, minus the licensed libraries which are special cased.

I'm a huge fan of SBT, despite its dwarf fortress like learning curve.


> dwarf fortress like learning curve

Good way to scare me off ever attempting to learn something, haha


This made me chuckle a bit. Also a huge sbt fan, but I see where the hate comes from (the learning curve)... :)


Author here - your ninja edit is one of the core premises of the entire post.

Some other folks have addressed the scalability questions you raised, but I'd add - I am in no way advocating for monoliths as a better approach. Rather, both have tradeoffs you need to think through before adopting.

Thanks for taking the time to read.


Scaling is comparatively easy when compared to generating the amount of usage that actually requires scale. So it's pretty much a waste to think too much about it in the beginning (note that I am not granting permission to be sloppy).


You could achieve better horizontal scalability by making the monolithic application work in a distributed fashion. This would be easier with some platforms than others. Actor based systems in particular tend to be easier to do this with.


My approach is to design like microservices and develop like a monolith. Thinking about microservices will force you to define modules, their boundaries, and their interfaces. A monolith will simplify deployment and refactoring. Once your code matures, you'll know if any microservice has to be taken out and deployed separately.
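
A minimal Go sketch of that idea, using a hypothetical billing module: the boundary is an interface, and the implementation stays in-process until (and unless) it earns a network hop:

    package billing

    import "context"

    // Invoicer is the module's public contract. Callers elsewhere in the
    // monolith depend only on this interface, never on the implementation.
    type Invoicer interface {
        CreateInvoice(ctx context.Context, customerID string, cents int64) (string, error)
    }

    // localInvoicer is the in-process implementation used while everything
    // ships as one binary.
    type localInvoicer struct{}

    func New() Invoicer { return &localInvoicer{} }

    func (l *localInvoicer) CreateInvoice(ctx context.Context, customerID string, cents int64) (string, error) {
        // ...persist the invoice locally...
        return "inv_123", nil
    }

    // If this module ever needs to be extracted, an HTTP or gRPC client that
    // satisfies the same Invoicer interface can replace New() without touching
    // any caller.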


My conclusion as well. In my mind, I call them 'air-gapped modules'.

https://www.linkedin.com/pulse/maintainable-software-archite...


I agree completely. A monolith with service objects has been a boon for our productivity and made it quite easy to extract those pieces of our architecture that benefit the most from running on their own nodes.


This is definitely one of the main takeaways that I wanted to get across.


This is what all experience indicates is correct.


Exactly. This is great advice.


Instead of microservices, I split my projects into tons of libraries and think of them as products, enforcing a well-thought-out and consistent API (a usage API, not an HTTP one). I call that an atomized monolith.

I get the cool things about microservices: properly isolated functionality, the ability to assign a team to each one, simplicity of code, and treating each feature as important, not just "that thing in the codebase".

But it also has all the good parts of a monolith: easy deployment and local setup, easy aggregation, and the ability to run integration tests.

For my Rails projects, geminabox was of great use to achieve this, as it allowed me to host private gems. Lately, I've done a lot of Go, and was surprised to see how natural a pattern this is with Go packages.

The only painful part for Ruby projects: keeping dependencies up to date in all those libs (since they all have their own test suite, it means that I at least have to update their test dependencies). To solve this, I've built some tooling that updates all my projects automatically and creates merge requests for them, running from a cron task.


Oh, btw, forgot to mention this: for Rails projects, mastering Rails templates and Thor is a must-have, too, in atomized monoliths. It lets you quickly build Rails engines with all the project setup done instantly. You can even build complex templates that ask for input to toggle features or provide variable values. Yeah, I love tooling :)

http://guides.rubyonrails.org/rails_application_templates.ht...


> Instead of microservices, I split my projects into tons of libraries and think of them as products, enforcing a well-thought-out and consistent API (a usage API, not an HTTP one). I call that an atomized monolith.

There's already a term for that: modularity.


A word which carries none of the lessons learned from microservices vs. monoliths. But using that name is fine by me; I'm not trying to define other people's world, just sharing mine.


I think it does carry those lessons, but most languages are incredibly bad at enforcing modularity. Monoliths make it far too easy to cheat, and dynamically typed languages are typically worse here.

Microservices then just forces on you the modularity your language should have already given you.



I think this is a good approach. Many popular libraries put API design as their top priority, sometimes at the cost of clean code. For example, RamdaJS has this principle of "API is king": "We aim for an implementation both clean and elegant, but the API is king. We sacrifice a great deal of implementation elegance for even a slightly cleaner API." IMO, good "monolith atomization" requires you to make this kind of tradeoff, and that can be tough.


An interesting article with some good points. I think the important takeaway is understanding that monoliths are probably better for smaller companies, with less total code and fewer total engineers. At small scales, the "costs" of microservices (network overhead, distributed transaction management, RPC complexity, dev-environment complexity) outweigh any benefits. A monolith lets you develop quickly, pivot, easily build cross-domain features, and is more efficient up to a point.

That said, I believe there is a point where monoliths begin to break down.

First, it is tough to keep code well structured in a monolith, and eventually things bleed between domains. That means, as mentioned, engineers must understand the entire codebase. This isn't practical for 100k+ LOC codebases. Strict boundaries, in the form of interfaces, limit the scope of code that every engineer must understand. You probably still need gurus who can fathom the entire ecosystem, but a new engineer can jump into one service and make changes.

Second, deployment is a mess with any more than a few hundred engineers on a given code base.

Third, it becomes increasingly difficult to incrementally upgrade any part of your tech stack in a monolith. Large monoliths have a tendency to run on 3-year-old releases of everything. This has performance and security implications. It also becomes difficult to change components within your monolith without versioned interfaces.

Fourth, failure isolation is much harder in a monolith. If any portion of code is re-used between components, that's a single point of failure. If your monolith shares DBs or hardware between components, those are also points of common failure. Circuit-breaking or rate-limiting is less intuitive inside of a monolith than between services.

TLDR; start with a monolith, migrate to micro-services when it becomes too painful.


There are some good points here and some I disagree with. One area, though, where I think he misses the point, is:

> Additionally, many of these stories about performance gains are actually touting the benefits of a new language or technology stack entirely, and not just the concept of building out code to live in a microservice. Rewriting an old Ruby on Rails, or Django, or NodeJS app into a language like Scala or Go (two popular choices for a microservice architecture) is going to have a lot of performance improvements inherent to the choice of technology itself.

Languages and tech stacks generally have tradeoffs. Comparing Rails vs. Go, you could consider the (massively over-simplified) tradeoff to be that Rails is better for prototyping and iterating quickly, while Go is better for performance. In an ideal world, you'd write your webapp in Rails, but put the performance-intensive stuff in Go. You'd need to communicate between the two by, say, HTTP. Suddenly you have services.

The performance gains of using a new stack aren't orthogonal to services - they're actually one of the key selling points of services: you can use whatever stack is most appropriate for the task at hand without needing to commit the entire project to it. You can use postgres for the 99% of your app that's CRUDy and, I dunno, cassandra for the 1% where it makes sense. It's difficult (although not impossible) to do that cleanly within a monolith.


One of the downsides of the blog post was that I adapted it from a lightning talk, so it was meant to be a little content-light, but to put ideas in people's minds around how to think about the tradeoffs.

For example, your point about Go vs Rails is an apt one - I would only add that I made that comparison because...

A: It was originally a golang meetup where I gave the talk.

B: Go is increasingly becoming popular as a choice people move to off of Rails, for performance sensitive code (Scala being the other popular choice I see), and also for building "microservices" themselves.

I could have, and maybe should have, gone a little more in depth at that part, but the idea wasn't to be fully exhaustive (for better or worse).

But the main takeaway about the performance gains was that the idea of putting the word "micro" in front of something magically made it more performant without appreciating why. It's a response to folks simply parroting information without understanding it.

Thanks for the feedback.


> Go is increasingly becoming popular as a choice people move to off of Rails

If they moved from Rails to Go, these people didn't need Rails in the first place, given how bare-bones Go is. That's the same issue with microservices: choosing a tech or architecture because of hype instead of understanding requirements. Microservices are something that should be an exception, yet they are pushed as a rule by many influential developers, who won't be there to clean up the mess when it becomes obvious it wasn't the right choice.


> You can use postgres for the 99% of your app that's CRUDy and, I dunno, cassandra for the 1% where it makes sense. It's difficult (although not impossible) to do that cleanly within a monolith.

That sounds exactly like the last two apps I have worked on. Django/Flask-based with Redis for caching. Suddenly they sound like trendy hybrid microservice apps.


This could be titled "If you do things wrong it won't be good".

A lot of his examples are of people doing things poorly or incorrectly. I could make the same arguments about object-oriented programming by saying it's bad because someone makes every function a public function.

For example, microservices are absolutely more scalable if done correctly with bulkheading and proper fallbacks and backoffs, and proper monitoring, alerting, and scaling.

But those things are hard to do and hard to get right.


Hi, author here - thanks for taking the time to read it.

You're not wrong in that this article is meant to point out the pitfalls of the approach, and to advocate for understanding before diving into a particular architecture.

It's meant to give people things to consider before deciding that breaking things into "microservices" is the right thing for their engineering org at that time.

I attempted to note several times that my intention was not to say "Microservices are bad", but rather "Please don't dive in before you consider the trade offs". It's not as simple as some folks might have you believe, so I felt it was valuable to have a "lessons learned" type retrospective coming from someone who has been involved in both approaches.

Thanks for the feedback.


Got it thanks for clarifying. My suggestion would be to make that a bit more explicit -- I didn't get that impression reading the article.


I think it's more of whether or not you need that scale. If you don't have the resources for it to make sense to optimize everything like that, you're probably wasting your time and making things slower by pursuing microservices.


Yes that's absolutely true. Everyone should not use micro services. In fact most people should not. I should have made that more clear.


That's a key point: it's easier to do things wrong/poorly on microservices.


Raises some good points, but I think the title isn't really correct. It's not "don't use microservices" - it's more about making sure you understand the implications of having a microservice architecture, and making sure it's not an excuse for not writing a monolith (or SOA) properly.


I do wish I had titled it better, as most people have (rightfully) dinged me on this one ;)


I'll go against the majority (or vocal minority?) and say: I like your title, it's good because it attracts attention (I wouldn't read the post if it were a "understand the implications of having a microservice architecture" or whatever others recommend).


“You don’t need to introduce a network boundary as an excuse to write better code”

Absolutely this!

microservices is just decoupling by another name.... and you do not need a network-boundary to enforce this.

Monolithic code can also be nicely decoupled too.


> microservices is just decoupling by another name.... and you do not need a network-boundary to enforce this.

If code is decoupled enough that it can be separated into independent processes communicating over a network, that creates additional freedom into how the components can be deployed to (real or virtual) hardware, which is itself a kind of decoupling.

If you have processes communicating by local-only IPC methods or, even moreso, components operating within the same process, there is a form of tighter coupling than exists when the components are separate networked components.


> If code is decoupled enough that it can be separated into independent processes communicating over a network, that creates additional freedom into how the components can be deployed to (real or virtual) hardware, which is itself a kind of decoupling.

It also introduces additional failure modes.


disagree,

coupling is a logical connection, and has nothing to do with calling-semantics.

Whether a local function call or an RPC, it's still the same level of coupling; this is just a difference in the (equivalent of a) link layer in a network stack.

Adding a network connection is a much more complicated calling-semantic than a function call, many more and different failure modes.


Sounds like you are on the verge of inventing Erlang!


Cannot agree more with this, based on my experience at a small startup. Let's say you want to develop a mobile app and a REST API for it, hosted somewhere in the cloud. There's so much hype about it that you want to do it "right" (it is right indeed, but for some distant future, until which your startup needs to survive). So the possible solution is to take some common stack, like Spring Cloud, and build a number of microservices with service discovery, a config server, OAuth and an API gateway.

It appears it's not so easy:

1. First, the documentation, as always, is not the best, and you'll have to spend time figuring out how to wire together different parts of the system and build various configurations of it for local development, CI builds, and production.

2. Then there's the debugging issue. Once you've figured out how to work with Docker (good news, it's really easy today), you may want to do some debugging in the IDE, but it becomes really painful to launch everything correctly with an attached debugger if the services interact with each other.

3. Finally, there's the production deployment setup and the associated costs. Besides the complexity of deployment, do you really want to pay for 14-20 EC2 instances at the time of the launch of your service and burn the money on 0% CPU activity? It will take months, probably years, to get a user base sufficient for utilizing this power.

The better approach is to develop a single server app with future scalability in mind. You can still have separate components for each part of the domain; you just wire them together at packaging time. This server app can still scale in the cloud, with a correctly set up load balancer and a database shared between nodes.
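
One way to read "wire them together at packaging time", sketched in Go with hypothetical orders/users components: each domain owns its own handlers, and a single composition root builds the one deployable:

    package main

    import (
        "log"
        "net/http"
    )

    // Each hypothetical domain component owns its routes and internals.
    func ordersHandler() http.Handler {
        mux := http.NewServeMux()
        mux.HandleFunc("/orders", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("orders"))
        })
        return mux
    }

    func usersHandler() http.Handler {
        mux := http.NewServeMux()
        mux.HandleFunc("/users", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("users"))
        })
        return mux
    }

    func main() {
        // The composition root: components are wired into one server process
        // at build time. A component that later needs to scale independently
        // can be given its own main() without rewriting its internals.
        root := http.NewServeMux()
        root.Handle("/orders", ordersHandler())
        root.Handle("/users", usersHandler())
        log.Fatal(http.ListenAndServe(":8080", root))
    }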

Fortunately, we didn't spend much time on building microservices (about 1 m/w to figure out the costs and benefits) and were able to refactor the code to a simpler design, but many developers should not care about them at all in the early days of their company.


I, for one, feel the same way when someone tells me they are building "microservices" for a small application that they don't ever plan to scale to that level. IMO, amongst us there is a widespread issue of "Here's the new cool thing - my application/system has to do it". The other day, a friend was talking on and on about setting up a Hadoop cluster for what I saw as a one-time-use batch script.


And? Did you try to talk your friend out of it?


Yes, I sent him this to read, http://aadrake.com/command-line-tools-can-be-235x-faster-tha...

and he changed his mind :-)


You use microservices when your project expands beyond the monkeysphere number where everyone knows everyone else.

It allows teams to work in their own world without having to coordinate as much with other teams or people.

Microservices are good for large companies. If you're small you don't need them.


> You use microservices when your project expands beyond the monkeysphere number where everyone knows everyone else.

A layered architecture can give you the same.

Microservices, imo, address organizational/industry deficiencies in the design and evolution of domain models. You're basically trading analytical pain for operational pain. As the top comment in this thread (with the excellent list) concludes, you will need "engineers".

> Microservices are good for large companies.

And this has nothing to do with the number of developers. It has to do with the inherent complexity of a unified domain model for large organizations. As an analogy, microservices are to layered architectures what scripting languages are to compiled languages.


Microservices also enforce boundaries significantly more strongly. A layered monolith can still easily have random people cut across boundaries without you knowing because there are hundreds of engineers all working in the same system.

Large companies don't have problems throwing more engineers at a problem. But they will always have a problem in coordination costs.

Microservices also allow you to use different tech stacks for different purposes more easily.

Maybe use Java for one involving Hadoop or some GIS library. Use Erlang for some message management service, use Go for some simple API service, use Node.js for some frontend web server, etc.

Overall the advantages of microservices come for social reasons, not for a particular technical reason.


Layered systems do not have to be 'monolithic'. Note that we're both at this moment using layered systems to have this conversation.

> A layered monolith can still easily have random people cut across boundaries without you knowing because there are hundreds of engineers all working in the same system.

I appreciated your final word regarding "social reasons" and I think we're in strong agreement in that regard.

In the final analysis, it seems accurate to say that the microservices approach permits runtime operational [micro]payments towards organizational and analytical debt [1].

The hypothetical system(/straw man?:) you posit above is indicative of organizational, not architectural, failure/deficiency.

[1]: in the 'technical debt' sense.


Microservices embody the idea that there might not be a unified domain model across the organisation.


> there might not be a unified domain model

I agree with this. See my reply to mahyarm c.f. "analytical debt". Keyword here is "might not".

If domain-level solutions exist, incurring the (perpetual) micropayments (in the context of operational complexities) of a rush to embrace microservices is a systemic failure of the technical leadership.


One suggestion I would make if you are going to use microservices is to consider using gRPC rather than REST. You can save yourself a lot of the hassle involved in the communication that way AND make things quite a bit faster.
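
For illustration, a hedged sketch of the calling side in Go (the service address is made up, and the pb identifiers in the comment stand for whatever protoc generates from your .proto contract):

    package main

    import (
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        // Dial once and reuse the connection; gRPC multiplexes calls over it,
        // which is part of the speed win over ad-hoc JSON-over-REST.
        conn, err := grpc.Dial("invoice-service:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // The actual calls use code generated from your .proto file, e.g.
        // (hypothetical names):
        //
        //   client := pb.NewInvoicerClient(conn)
        //   resp, err := client.CreateInvoice(ctx, &pb.CreateInvoiceRequest{CustomerId: "c_42"})
        //
        // The request/response types and compatibility rules live in the
        // .proto contract rather than in hand-maintained documentation.
    }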


Like all things, one size doesn't fit all. Not everything is a nail, regardless of how shiny the hammer is. Having said that, when the situation is right, microservices are great.

Some of us have been through this all before, with SOA or, in my case, with COM. Each individual component is simpler, but the documentation between the components becomes absolutely vital.

We ended up keeping copies of the interfaces in a central location (with documentation of all changes per version) so that everyone would know how to talk to all the other systems.

And don't think that the interfaces won't change. They will. And often across many systems/components. Like a ripple.


The problem is to define the scope of each service. And it is still possible to create spaghetti out of how the services interact and how coupled they are with each other.

If done poorly, it is like trading one problem for another.


Yes, I have definitely experienced this. At the company I work at we have dozens of super tiny services that could have been replaced with a single class or module in a codebase.

Each of the dozens of microservices gets its very own dedicated AWS load balancer, RDS instance, and Auto Scaling Group in multiple regions. Just the infrastructure management alone is monumental.

Edit: punctuation.


As someone working with this setup right now, coming from what is fondly referred to around here as the God-Monolith of our 1.0 version, I couldn't disagree more....

But as always, this is an artform, writing and designing, not laying down pavement.

There's no "right" way, and any blanket statement about anything is false.

Don't use microservices where they don't make sense, make educated decisions, and choose the best option for your situation.

It made sense in our situation, because all our services have very very very specific rules and boundaries and there's no overlap anywhere.


> Fallacy #5: Better for Scalability

> However, it’s incorrect to say that you can only do this with something like a microservice. Monolithic applications work with this approach as well. You can create logical clusters of your monolith which only handle a certain subset of your traffic. For example, inbound API requests, your dashboard front end, and your background jobs servers might all share the same codebase, but you don’t need to handle all 3 subsets of work on every box.

This makes little to no sense to me, and I feel like we're bending the definition of "monolith" to mean "microservice" so that we can tick the bullet point. How, exactly, do I achieve this when my code is mashed together and all running together?

I have a monolithic app today: an internal website, which is so small that it could be served (ignoring that this would make it a SPoF) from a single machine. But it's so closely bound to the rest of the system, it is stuck alongside the main API. So, it gets deployed everywhere.

If it were discrete enough that I could run and scale that internal service separately, I wouldn't be calling it a monolith. At that point, they're separate executables, and scalable independently — that's practically the definition of microservice. And I can't do this if (where they need to) they don't talk over the network (one of the earlier bullet points).


If you could separate the inbound traffic to either the website or the API, then you could do this. You'd need something in front of the code you're deploying, though.

My team has a 500k-line monolith written in Java 1.6. I don't really want to invest in fixing it; I'm migrating stuff to the new system. So a way to keep the old one going risk-free is to create three load balancer pools and have Apache send traffic to the three based on URL pattern:

* /users goes to pool one

* /dashboards goes to pool two

* everything else goes to pool three

That guarantees that /users and /dashboards can be kept to a certain level of performance - by adding more machines, not by diving into the code and trying to fix stuff.

The benefit is that it's the same deployable in all cases, so it's very easy to push.
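
The same routing idea, sketched as a tiny Go reverse proxy instead of the Apache config described above (pool addresses are hypothetical); every pool runs the identical deployable and is scaled by adding machines:

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
        "strings"
    )

    // pool builds a proxy to one load-balanced pool of identical deployables.
    func pool(target string) *httputil.ReverseProxy {
        u, err := url.Parse(target)
        if err != nil {
            log.Fatal(err)
        }
        return httputil.NewSingleHostReverseProxy(u)
    }

    func main() {
        users := pool("http://users-pool.internal:8080")
        dashboards := pool("http://dashboards-pool.internal:8080")
        everything := pool("http://default-pool.internal:8080")

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            switch {
            case strings.HasPrefix(r.URL.Path, "/users"):
                users.ServeHTTP(w, r)
            case strings.HasPrefix(r.URL.Path, "/dashboards"):
                dashboards.ServeHTTP(w, r)
            default:
                everything.ServeHTTP(w, r)
            }
        })
        log.Fatal(http.ListenAndServe(":80", nil))
    }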


Stabby! Several thumbs up to point #1, that interface boundaries needn't be coincident with service boundaries. In my experience, the benefit of breaking out microservices is the decoupled deployment. A heuristic is: if you have fixes/features that are waiting to be pushed to production until unrelated code is passing/QA'd, you've got a good candidate for a separate service.


A nice compilation of fallacies about microservices, something that we cannot ignore; but after reading a little about Kubernetes, I think many of those problems may be resolved using Kubernetes and some common sense.

https://kubernetesbootcamp.github.io/kubernetes-bootcamp/ind...


@StabbyCutyou, how does Basho's choice of Erlang as the primary language affect this choice? My (naive) understanding is that Erlang forces one to build a single-process system as if it were a multi-process system from day 1. Does this make the monolith -> microservices switch easier for Erlang systems than it is for others?


Erlang doesn't do encapsulation very well. Even if I divide code into clean OTP applications, each with its own public interface, nothing in the compiler stops people from referencing internal modules and their exported (i.e. "public") functions. The problem is that many OTP behaviours' internal callbacks need to be exported, so they are exposed publicly.

I guess one needs to use the xref tool to find all references from outside the OTP application.


While this and many other writings about microservices are largely concerned with network-based environments, there exists another microservice exemplar specific to the JVM world:

OSGi[0][1]

I mention it mostly to assist those wanting to explore the concept of microservices itself, as opposed to assuming a network transport is always involved. Being JVM-specific, "kicking the tires" on it naturally requires that environment. Perhaps, though, some of the writings discussing it would be of benefit to those using other tech stacks.

Of course, OSGi does not preclude distributed processing (and often is employed for such).

0 - https://www.osgi.org/

1 - http://www.theserverside.com/news/1363825/OSGi-for-Beginners


It doesn't have to be messy. I've worked in monoliths that are a complete disaster. I've worked in micro-architectures that are a complete disaster. It's the same kinds of people and management practices making these disasters.

I will say the only clean systems I've worked in have been microservice-oriented. All the monolithic systems I've worked on never scaled properly and always had bugs with 1000-function-deep stack traces.

I've talked to people who have worked in excellent monoliths (rails and django). I know they exist.

Moral is: do it right and have good development practices.


I heard about microservices about a year ago, and now it's said the hype has ended before I even noticed? Admittedly I'm not in the loop, and it's hard to track all the trends from outside.


You need almost none of that scary list to start building microservices. Lambda functions can be created in minutes, even in the UI console. And they have almost everything from that scary list by default.

Lots of people are still in denial regarding microservices...


My thought is that this comparison between a monolithic code base vs a microservices code base is a bit subjective. If you're starting out chances are your code base hasn't even gotten to the level of being monolithic. So those thinking about how they're going to architect their platform may begin to think that a microservice setup could help for future changes to their code. It really depends on each team, their background, and how they want to think about their platform in the future. To list out the pros and cons of both to draw a conclusion that one is better than the other is certainly setting a bias that I believe to be a bit unfair. Just look at Netflix and their container services. It's a platform adopted by a ton of companies including Nike. So for some a microservices approach makes a lot of sense.


You have a good point in that in the early stages, any app is likely not to be a "monolith", but it's less about size/LOC and more about the design ethic of the architecture itself.

If you build your codebase internally with service level abstractions in mind, you can gain a lot of benefit without the cost of the network or the additional errors it can introduce.

Thanks for reading!


I guess, in my head, microservices also eliminate a lot of code bloat. Just my opinion though.


Why would someone push for those "5 truths"? The point of microservices is to ease Ops' life, so that deploying is less of a "big bang"-like event and more geared towards incremental and local evolutions.



If your product or project is not a service, or does not have services, then it does not need such a thing as microservices.


If this person used microservices perhaps their site wouldn't be down right now...


I swear, the HN front page algorithm is easily gamed, this gets a few points quickly and it rises straight to the front page. I don't know if HN is accounting for vote rings but some penalizing should be implemented.


Agreed. Most of the points this article makes are invalid as well. Google has been running the microservices pattern forever and seems to scale fine. For example, he mentions that costs are node based, but nowhere in a microservice pattern does it say each service has to run on its own VM.


AFAIK the content is curated, so content with a small amount of upvotes can show up if some admin wants it to.


It's not curated. People used to be asked to repost their content if the admins thought it was interesting, but this no longer occurs. (This turned into a system that could automatically repost content considered interesting; I've stopped posting stories as often, so I'm not sure if it still happens.)

Furthermore, there are protections against vote rings. If, for example, someone votes directly on the URL for a story, or if the referrer is often the same, those votes are discarded.

However, you're right that the algorithm has evolved over time. Visit https://news.ycombinator.com/classic to see the previous algorithm in action.


I remember some people complaining about it some time ago, thanks for pointing that out then. :D


The title should be "I've never implemented microservices properly, so you should avoid them."


I appreciate the feedback, but unfortunately I think you've made a bit of a leap in terms of what my point here was.

It isn't that i'm saying "Don't build microservices", but rather "Don't adopt this approach until you understand the tradeoffs involved". I've worked on teams that have done it well, and not done it well. I've worked with large codebases that are well maintained, and poorly maintained. There are tradeoffs that need to be taken into consideration before adopting any major architectural approach.

I will say that upon reflection, the original title could have been reworded a bit to better express this.


I think a lot of people don't implement them properly. And if you're an individual or a small team trying to build something it's usually overkill.


Agree 100% with both of these statements. A general rule seems to be to divide them along transactional boundaries and business concerns.


I'm curious what makes you think that. At the end his bio says he has 12 years of experience, and from his blog seems to know a thing or two.


Because scaling a monolith is not a realistic approach. Every single monolith I have worked on has been refactored as domain specific microservices, and in every case it was a resounding success.


"I've never implemented monolith properly, so you should avoid them."


In my experience scaling a monolith just costs more (i.e. hosting costs). Our product, when we started, was quite expensive to run at our high traffic times, but we were eventually able to cut our costs drastically by breaking certain pieces out into services.


Depends on the product, but for many startups those additional costs can be crippling.

Also, I have found it much easier to onboard a new engineer, give them the requirements for a service and let them go at it. I've been using AWS Lambdas to great effect in this way.


Certainly, not everything is a nail.



