The Death of Microservice Madness in 2018 (dwmkerr.com)
993 points by Sandman on Jan 21, 2018 | 441 comments



I think "microservices" is so appealing because so many Developers love the idea of tearing down the "old" (written >12 months ago), "crusty" (using a language they don't like/isn't in vogue) and "bloated" (using a pattern/model they don't agree with) "monolith" and turning it into a swarm of microservices.

As an Infrastructure guy, the pattern I've seen time and time again is Developers thinking the previous generation had no idea what they were doing and they'll do it way better. They usually nail the first 80%, then hit a new edge case not well handled by their architecture/model (but was by the old system) and/or start adding swathes of new features during the rewrite.

In my opinion, only the extremely good developers seem to comprehend that they are almost always writing what will be considered the "technical debt" of 5 years from now when paradigms shift again.


I call this the painting problem. Painting the walls of a room seems easy to an amateur: you just buy a few gallons at Home Depot and slap it on. But a professional knows that prep, trim, and cleanup are 80% of the job, and they take skill. Anybody can slap paint onto the middle of a wall. What's difficult and time-consuming is making the edges sharp and keeping paint off the damn carpet.


So you are saying edge and corner cases are the most difficult?


They are also most common: «the high-dimensional unit hypercube can be said to consist almost entirely of the "corners" of the hypercube, with almost no "middle".» https://en.wikipedia.org/wiki/Curse_of_dimensionality#Distan...
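
For the skeptical, a quick back-of-envelope illustration (my own sketch in Python, not from the linked article): the fraction of the unit hypercube's volume inside the inscribed ball (the "middle") collapses toward zero as the dimension grows.

    import math

    # Fraction of the unit hypercube's volume inside the inscribed d-ball
    # of radius 1/2 (the "middle"); everything else is "corners".
    for d in (2, 5, 10, 20, 50):
        middle = math.pi ** (d / 2) / math.gamma(d / 2 + 1) * 0.5 ** d
        print('d=%2d  middle fraction = %.2e' % (d, middle))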


In fault-tolerant distributed systems, yes; undoubtedly.


Great pun!


Not sure if it's a pun, or the literal etymology of the phrase.


Keeping the stuff from spilling all over the place is also among the most difficult things.


Keeping your buckets sorted is definitely important


This is Parkinson's Law of Triviality. https://en.wikipedia.org/wiki/Law_of_triviality


It is frightening to see bikeshedding in practice. I used to work for a regional transportation authority that would administer hundreds of millions of dollars in federal/state/local road projects. Local leaders would come in monthly, sit at a huge round table (40-50 city/county/regional leaders), and vote on projects. $800 million projects would sail through with almost no questions in the first five minutes, but then the leaders would spend the next two hours of the meeting debating the $100,000 pilot project to help the homeless.


I once sat on the board of a student organisation managing a $3M investment fund, which was rapidly declining as funds were being withdrawn to cover budget deficits. At the same meeting where no one could explain, and no one seemed to care about, a $100,000 transaction listed as "assorted expenses", people spent 1.5 hours debating whether the coffee shop should keep buying newspapers for $200 a year...


I think it's also partly an effect of humans not having an intuitive grasp of big numbers. Add to that the effect that humans think more about what they can imagine. You can kind of imagine spending $200 on newspapers in your everyday life; you seldom make $100,000 decisions. Big one-time transactions seem intuitively less impactful than many smaller ones.

As programmers we deal with numbers a lot more often, so this effect is minimized. But it's still there.


I'm skeptical that $800 million projects would pass through local government at that level in 5 minutes, unless everyone who voted in favor had already been canvassed, or had taken part in intense debates at other meetings, or was rubber-stamping a special committee decision, etc. What regional authority was it where $800 million projects happened so regularly?


This was a transportation authority for a large metro region. CTRMA (Central Texas Regional Mobility Authority) is an example that works with large projects like this all the time, though it wasn’t the one I worked for.


Why? Do some of them have something against helping the homeless on principle?


In that case it was a jobs program (to help construct roads), and the rationale against it was that it extended beyond the purview of what the commission was meant to do.


Isn't that sort of rational then?


Definitely rational

I think (at least, I hope) what OP was trying to get at was the relative irrationality of spending a very long time debating something 1/8000th the cost of the very expensive item (especially when $100,000 is most probably far less than many of those board members earn as a salary every year).


My experience is a careful amateur painter is 1000% better than an average professional. Professionals are certainly a lot faster, but if you look carefully at their work it is in the main very shoddy.

If you want a good result, don't skimp on the tools. Buy good-quality brushes, rollers, filler, throws, and paint. Also buy an edger to cut in the walls and ceilings. One final tip: buy some of the disposable plastic liners for the roller tray so you don't have to spend time washing out the tray at the end of the day.


I think there are two things at work here.

1 - there are very good professionals, but they aren't cheap and have all the work they need

2 - an amateur can always decide to take economically irrational amounts of time on a project; a professional can't.

So the result is that as a careful amateur you can end up with a job you probably wouldn't be able to convince yourself to pay for.

We can probably all agree that the worst case is the careless amateur....


Here where I live (Australia) it is near impossible to find good professional tradesmen. While it might be irrational on a financial level to spend my time painting or other trades, I get the quality I want by doing it myself.

Even on the financial level tradesmen here in Australia are so overpaid (compared to everyone else) that it makes sense even for me to do it myself. My house was painted by "professionals" just before we moved in (purple??) and it cost the previous owner $20,000. My wife and I repainted it white (this required 5 coats of paint) and it took us two weeks including the time I had to spend getting the purple paint off the windows and floors and patching all the holes that had been painted over.

Tip: if you are selling your home, don't paint it some unusual color. I managed to buy my home $200K under its actual market value, and quite a bit of this was due to the crazy way the place had been painted.


Next time, two (or at most three) coats: the first coverup coat should be black, since the purple (or red or whatever) won't show through black.


I think black to white is going to take more than two coats of white.

I have used gray before to go from red to yellow as it seems to work in fewer steps.


> economically irrational

I am not sure that description is apt. Underdelivering is only rational for the painter because the client won't be in the market long enough to gather sufficient information. The client is getting cheated, and it creates a lemons market; the fact that it is a Nash equilibrium does not make avoiding the entire thing irrational.

And the entire thing has parallels in software development...


I meant it in a similar sense to that in which some hobbies are often irrational if you consider them only for market value. I probably could have chosen a better term.

An amateur painter will often spend far more time than any reasonable estimation of the market value of that job would justify. Part of this is lack of efficiency and experience, but another part is doing things with sharply diminishing returns. For example, you might apply expensive techniques to inexpensive materials, in a way that would not make for a viable business.

To put it another way: amateurs can easily arrive at a finished job (of painting, in this case) that they would never be able to convince themselves to pay market rate for. This is irrational in certain restricted senses.

For what it's worth I don't believe that "lemons market" is accurate for painters in general, but I'm guessing there is a segment of it that meets your description.


> amateurs can easily arrive at a finished job (of painting, in this case) that they would never be able to convince themselves to pay market rate for.

If you're looking at it from an economic standpoint, to be considered irrational, the tradeoff between the value of time vs the cost of hiring a professional would have to assume that the value of the time is greater than the value of the professional.

I always hear this compared to "what is your hourly rate in your job" or "my time is worth more than that", but I think for most people this just isn't a fair comparison. Just because I can spend 10 hours painting my room and I make $x/hour and it would cost <$x to pay a professional, does not mean I would be able to actually generate an _incremental_ $x per hour by not painting and hiring the professional.

For most people on a salary (where your pay is fixed no matter how much time you put in), their time outside of the job is, in a very real sense from an economic standpoint, valueless, and it would be perfectly rational to spend that time yourself, no matter how cheaply a professional could do it.


   If you're looking at it from an economic standpoint, to be considered irrational, the tradeoff between the value of time vs the cost of hiring a professional would have to assume that the value of the time is greater than the value of the professional.
That’s an oversimplification, and not the argument I was attempting.

For what it’s worth, I don’t think we are really disagreeing much.


Not at all, merely pointing out an aspect of this trade-off that I often hear described as irrational based on the exact oversimplification that you pointed out. Consider my comment an addendum to yours. :-)


Not sure where you're from, but in Germany there are industry standards professionals have to adhere to. Sure, there are sloppy professionals like in any other job, but if they adhere to the industry standards, the result will be pretty much what you expect, and they're not allowed to take shortcuts that result in lower-quality work.

On the other hand, coming from a country that takes its trades extremely seriously, I was shocked by what I observed every time I visited the UK. It seems the expectations vary drastically between countries (though I'm sure there are quality professionals in the UK too).


It doesn't make too much sense to do quality work in the UK, on the whole.

House price / rent value is basically dependent on location. Money spent on fitting is generally wasted.

There are a lot of great craftsmen in the UK, but they tend to work on a subset of jobs - restoration, passion projects, really high-end stuff.

PS: This is also a direct result of ever-rising house prices. If land/house prices are consistently rising, it often makes more sense to let lots stay empty, rather than building (which is always a massive risk). It also makes sense to do any repair or upkeep work as cheaply as possible - since in a rising market, the only way you can lose money is by expensive development costs.


That explains a lot. Thank you.


I am in the land of the shonky tradesman - Australia. Not only are they massively overpaid, they make the UK tradesmen look like gods.


Quality cleaning solution and sandpaper, so your high-quality paint doesn't peel 2 years later.


Yes, I agree, especially about the cleaning part. In the kitchen and the rooms close to the kitchen, it is critical to clean and clean and clean. The prep should take longer than the painting.

On the sandpaper front, make sure to spend money there. Cheap sandpaper lasts 2 nanoseconds and does a terrible job.


That may be a bad analogy, considering painting your own apartment is something a great many people do, and often with good success.

My parents (not in any way experts on that field) painted their entire house themselves, except for two rooms that were painted by a professional painter, and the professional painter left much worse corners than my parents. This was the paid-for result (ignore the dark corner at the bottom, that’s caused by the flash): https://i.imgur.com/s1VHV2W.jpg


Analogies don't always have to apply to your parents.


My argument is that in painting there is a much smaller skill difference between a hobbyist and a mediocre professional.

In actually good software engineering, the skill difference is much larger.


> much smaller skill difference between a hobbyist and a mediocre professional.

> In actually good software engineering, the skill difference is much larger.

Based on my years across various companies, the difference between "hobbyist" and "mediocre professional" developer/programmer is close to nil


Aside from the "sample size of one" issue: how much time did your parents take, how much effort and preparation went in, and how did that compare to the time and effort the professional needed? It could be a trade-off issue there.

Also, note that this thread started with a comment about how many developers don't appreciate what the actual hard problems in their own line of work are, and how that produces poor results because of that. I expect that to be true of any profession that doesn't have some kind of rigorously enforced standard.


My family painted about half our house ourselves when I was about a pre-teen. My mom just hired a painter to paint her new house. The painter didn't take an appreciably shorter time period than a couple pre-teens did. IIRC, it took us about a week for about 800 square feet worth, and it took the professional 3 weeks for 2400 square feet. And yeah, we did the whole masking-tape, dropcloth, and primer routine as well, and our corners were just as good. (I was gonna say we probably took more cajoling, but given the number of times my mom had to call the painter to make sure he showed up and would have it all done by move-in, I'm not sure that's true.)

Painting isn't a profession with a particularly high return on experience. You hire a painter for comparative-advantage reasons. It may take him just as long as it takes you, and several thousand dollars more, but chances are you can make more than several thousand dollars in the time you would've spent painting your house. (If you can't, you may want to re-think hiring a professional, and instead go into business as a painter yourself...)


When you weigh hiring a professional versus DIY, you have to consider the impact of income taxes, not just what you earn/hr versus what you pay/hr. You have to pay the pro with post-tax earnings, and then the pro has to pay tax on what you pay them before they can do anything with it. Whereas if you DIY, you aren't taxed for it. (Also, many people can't just 'work more hours' to get paid more.)
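
A tiny worked example of that effect (all numbers made up purely for illustration):

    # Assume a 30% marginal income tax rate and a $2,000 quote from the pro.
    marginal_rate = 0.30
    pro_quote = 2000.0

    # To hand the pro $2,000 after tax, you must first earn:
    pre_tax_needed = pro_quote / (1 - marginal_rate)
    print('Pre-tax earnings needed: $%.0f' % pre_tax_needed)  # ~$2,857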


Have your parents start painting houses every day and you'll find they get far worse at it.


Great analogy. I tried to paint my own flat. Was a disaster.


I've tried to hang sheet rock. How hard could it be? It certainly looks easy. A disaster for me, too.


I'm a die-hard DIYer. I hire out drywall and blueboard/skimcoat work. Two guys who do that work every day can do in a 6-hour day and ~$800 what it would take me two weeks and $300 in materials, and their job looks 4x better than mine.


For us non-English speakers - "sheetrock" == drywall.

I find I can do most such work better than (average) professionals, but it takes a lot of time to learn, prepare, execute (and sometimes re-execute), and clean. It all depends on when you are satisfied with the result.


I recently came across 'sheetrock' as a term and had to google it...

For UK natives it's actually three steps:

"sheetrock" == drywall == Plasterboard


I have also heard it as "gypsum wallboard", which is certainly most descriptive.

I'm not certain whether the water-resistant, mold-resistant variety typically hung behind tiles in bathrooms is ever referred to as "wetwall" or if they still call it "drywall".

Nor am I aware of whether anyone calls the foil-backed, glass-reinforced, fire-resistant variety "firewall" instead of "drywall".

I wouldn't be surprised if obsolete and inapplicable names still attach to items with the same function. If we ever move to polyethylene film panels sandwiched with phase-change material for our walls, it might still be called "plasterboard" somewhere.


It is called drywall because it is installed already dry, unlike the old technique of lath and plaster, where you build up the wall with multiple layers of wet plaster: https://en.wikipedia.org/wiki/Lath_and_plaster


Sheetrock is USG's trade name for a line of drywall products. Sounds like it's going the way of Kleenex, Band-aid, or Hoover.


I used to work for a home theater company. We were regularly working on newer mansions in the suburbs. If you ever want to see some dudes who work faster than fast, go hang out with a drywall crew.

I remember showing up on a site just as the drywall guys had started carting in a ton of drywall. By the time we left in the afternoon, they had cut, hung, taped, and mudded over 3,000 square feet of home; it was insane.


I feel you.

I managed to paint my walls, but it took me much more time than expected and my flat was chaos for some weeks thereafter.


A real hacker can fix his car, paint that wall, and code too.

There is no problem painting for me and many others who know how to use their hands.

It's amazing when you talk to people and they are like: Did you do that? How do you know how to do that?

Learn, try, and you can do anything. As people did at every step.


The hardest part is knowing when it is a better use of your time to hire someone else to do it for you.

You can just follow all the same steps as you usually do:

1. Define the problem.

2. Determine the desired outcome.

3. Measure the existing state.

4. Note the foreseeable failure modes.

5. Plot the path from existing state to desired outcome as a series of reasonable steps, avoiding the failure modes.

6. Recursively analyze the steps, breaking them down into smaller steps if necessary.

7. Rework your plan as additional failure modes become evident.

In my experience, yes, you can do anything, as long as you ignore costs. You can't, for instance, justify buying a specialized tool to finish just one job. The hardest part is really step 2.


>You can't, for instance, justify buying a specialized tool to finish just one job

You must be new to DIY :)


I really think microservices are a process win, not a technical win. It's easier and better to have 5 teams of 10 managing 5 services than one team of 50 managing one super-service.

When I see a team of 7 deciding to go with microservices for a new project I know they're gonna be in for a world of unnecessary pain.


I agree that it's _easier_ for teams to have their own little fiefdoms, but not necessarily _better_. Shipping the org-chart is often a symptom of a leadership problem. When natural service boundaries exist, good leadership may choose to ship the org-chart, but too often extrinsic factors such as the arrangement of devs' desks dictate the architecture.


> ...but too often extrinsic factors such as the arrangement of devs' desks dictate the architecture.

hahaha

Do you know if there are some well chronicled cases of this happening? I find it very believable, but you know, would love to read something "actual".


I believe this is a paraphrase of Conway's law. The Wikipedia article lists three studies in favor of it, so there's that.

https://en.wikipedia.org/wiki/Conway%27s_law


[Disclaimer: I've never worked for MS or Amazon, working solely off of reported info here...]

A kinda-random example off the top of my head of "shipping the org chart" would be the historical gaps between Windows, Development, and Office at Microsoft. I.e., Office is getting a new ribbon, but no, Development can't supply those icons or any components because they're an "Office thing"; or the internal API/VBA battles along the same lines.

On the opposite side, as reported in the press, Amazon has used this effect to create manageable teams and build up their own SOA: the two-pizza rule for teams means that teams can only really make targeted self-contained services. In this case Amazon worked backwards and re-structured their teams so that 'shipping their org chart' created the desired architecture.


My favourite Microsoft 'ship the org chart' example is their app store.

Microsoft has an app store built into every Windows 10 computer worldwide. And of course, you cannot download Microsoft Office from it. However, it does helpfully tell you to get Office by heading to 'MicrosoftStore.com'.

It's all sort of head scratching. A normal person might ask a lot of totally reasonable questions about this, like:

- Why does Microsoft's App Store not have Microsoft's own apps in it? Office isn't the only missing app; Visual Studio is missing too (even the free 'Code' Electron editor).

- Why does the phrase 'Microsoft Store' refer to something 100% different (in products and functions) than 'MicrosoftStore.com'?

- Why does the Office team have their own app updating utility, when there's already supposed to be one 'blessed' place for App Updates inside the Store? (Same question for Visual Studio Code).

---

Anyway, I know a lot of the above is org chart related, or enterprise needs / backwards compatibility related. But they are all reasonable questions despite that.

And stuff like this is part of the reason why reasonable people still fall for phishing schemes. The above sounded to me like some weird popup advertisement trick, until I saw it first hand.


Definitely a broken "app store" vs, say, Apple's model - where you can get any Apple software you want from it for your Apple devices.

(Isn't it the only way to get Apple software for your Apple devices, even?)


Just go work somewhere that is big enough for that to happen: usually large tech cos with a number of devs exceeding Dunbar's number by about 2x or more.


Isn't the idea to decide the architecture then update the org-chart to reflect this?

Not that this actually happens, or that it would be a particularly good idea if it did, for other reasons (like breaking up teams, etc.).


Happens where I work quite frequently, and yes it does have the effects you describe


The key to microservices is a framework and tooling around them to make them work for you. Release management, AuthN/AuthZ, compilation, composition, service lookup, etc. should all be "out-of-the-box" before microservices should ever be considered. Otherwise the O(n) gains you get in modularity turn into O(n) FML.


Would love to hear some people's thoughts on good contenders in each of these areas. What are some good boxes to get these out of in 2018?


Crossbar.io is currently an interesting approach IMO.

It does have a single point of failure that is also a bottleneck (the router), and it's a lot of work to load balance it or fail it over.

But other than that, it provides a fantastic way to make microservices:

- Anything is a client of the Crossbar router. Your server code is a client. The DB can be a client. The web page's JS code is a client. And they all talk to each other.

- A client can expose any function to be called remotely by giving it a name. Another client can call this function by simply providing the name. Clients don't need to know each other; routed RPC completely decouples the clients from each other.

- A client can subscribe to a topic at any time and be notified when another client publishes a message on that topic. Quick, easy, and powerful PUB/SUB with wildcards.

- A client can be written in JS (Node and browser), Python, C#, Java, PHP, etc. Clients from different languages can talk to each other transparently. They all just receive JSON/msgpack/whatever, and errors are propagated and turned into the native language's error-handling mechanism transparently.

- The API is pretty simple. If you've tried SOAP or CORBA before, this is way, way, way simpler. It feels more like using Redis.

- It uses WebSockets, which means it works anywhere HTTP works, including in a web page, behind a NAT, or with TLS.

- Everything is async. The router handles 6,000 msg/s to 1,000 clients on a small Raspberry Pi.

- The router can act as a process manager and even a static/WSGI web server.

- You can load balance any client, hot swap them, ban them, get meta events about the whole system, etc.

- You can assign clients permissions, authentication, etc. Sessions are attached to each connection, so clients know what other clients can do.
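
To give a feel for the programming model, here is a minimal client sketch using the autobahn Python library (the usual WAMP client for Crossbar). The URL, realm, and names are made up, and API details may differ between versions:

    from autobahn.asyncio.wamp import ApplicationSession, ApplicationRunner

    class Component(ApplicationSession):
        async def onJoin(self, details):
            # Expose a function under a name; any other client can call it by name.
            await self.register(lambda x, y: x + y, 'com.example.add2')

            # Subscribe to a topic; fires whenever any client publishes on it.
            await self.subscribe(lambda msg: print('got:', msg), 'com.example.topic')

            # Call a procedure registered by some other client, knowing only its name.
            print(await self.call('com.example.add2', 2, 3))

    # Point at your (hypothetical) local Crossbar router and realm.
    ApplicationRunner('ws://localhost:8080/ws', 'realm1').run(Component)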


Release Management: it's not what the cool kids are doing, but Microsoft's Visual Studio Team Services is the easiest CI/CD solution I've ever used.

Service Lookup - Hashicorp's Consul.

Authorization/Authentication - I've played with Hashicorp's Vault. It seems overly complicated but the advantage is that it doesn't tie you to a single solution and it can use almost anything as a back end. I haven't had to solve that problem yet.
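
For the service-lookup piece, a minimal sketch with the python-consul client (this assumes a Consul agent on localhost; the service name and port are made up):

    import consul

    c = consul.Consul()  # talks to the local agent on localhost:8500 by default

    # Register this instance of a hypothetical "billing" service.
    c.agent.service.register('billing', service_id='billing-1', port=8080)

    # Look up healthy instances of that service.
    index, nodes = c.health.service('billing', passing=True)
    for node in nodes:
        print(node['Service']['Address'], node['Service']['Port'])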


Would like to echo this request.

Had to implement a microservice architecture in Python about a year ago and was jealous of my Java colleagues, who (so I heard) have a great ecosystem for enterprise service discovery, messaging, etc.


Yes, we do. The ecosystem is great, and a microservice architecture may be manageable even in small teams, but it's always the process that defines the success. From my own experience, the biggest technical problem of microservices in Java can be the performance of the whole system (in a monolith there's much less logging and state management, and there are no network latencies or mismatched parallelism settings). Discovery, auth, messaging, configuration management, etc. are solved.


> When I see a team of 7 deciding to go with microservices for a new project I know they're gonna be in for a world of unnecessary pain.

Faced this a couple of years ago, and I was the lone dissenting voice suggesting this was not going to go well. Then I learned that "microservice" in reality just meant everything was going to be one nodejs process running endpoints, with ios and android clients hitting it, which... didn't really fit my understanding of "microservice"; that's just "service".


If you are slightly careful and put this service behind a load balancer/reverse proxy, you can take any given set of endpoints and turn them into a microservice when you need to. Sounds perfect for a small team.


You could. In this particular case, if memory serves, we had an existing set of endpoints functioning, but people deemed it in need of a rewrite to Node, because 'microservice'. Instead of putting the code we had behind a proxy and then migrating, we had to start over. Because 'scale' and 'microservice'. And 'lambda'.

Also, the same team spent several hours in meetings deciding whether or not to allow email signup, or facebook signup, or both, or neither, in the mobile app. Then had the same discussions/arguments a few weeks later when a couple new people joined the team.

I realize I sound a bit bitter. I got pushback because I'd used the 'microservice api' from a "we don't like that" language. Consuming the API (which I'd understood to be part of the reason for having a central API vs just hitting DB tables directly) from anything that wasn't also Node was outside the groupthink, and caused problems.

I left the project.

They've got their microservice architecture, but no userbase (yet?) to be concerned about scaling issues.

I understand it's reasonable to be concerned about potential scaling problems, but the team/project spent far too much time chasing architectural perfection (and really... 'shiny new stuff') vs executing a marketing plan. It's easier for a group that is tech-folk-heavy to focus on that; I get it. But it didn't solve any problems at hand. And when the mythical "2 million users in an hour" problem happens, it'll probably hold up. Unless it doesn't.


I'm the dev lead for a largish company with a small development shop, 4-9 people (contractors come and go). I went for a microservice-like hub-and-spoke model where a bunch of small services integrate with a central Mongo database, with all of the CRUD managed and validated via an API. Cross-cutting concerns like configuration (a wrapper around Consul), logging (structured logging via Serilog), and job scheduling (Nomad) are handled via a common package.

I chose this approach because the developers who were already there were relatively new to C#, and I knew we were going to have to ramp up contractors relatively fast.

Our DevOps process revolves around creating build and release processes by simply cloning an existing build and release pipeline in Visual Studio Team Services (the hosted version of Team Foundation Server) and changing a variable. Every service is a separate repo. Each dev is responsible for releasing their own service.

The advantages:

1. All green field development for a new dev. They always start with an empty repo when creating a new service.

2. Maintenance is easier. You know going in that all you have to do is use a few documented Postman calls to run the program if you need to make changes. It's also easy to see what the program does, and if you make a mistake, it doesn't affect too many other people as long as you keep the interface the same.

3. The release process is fast. Once we get the necessary approvals, we can log on to VSTS from anywhere and press a button.

4. Bad code doesn't infest the entire system. The permanent junior employees are getting better by the month, and we are all learning what works and what doesn't as we build out the system. Each service takes our lessons learned into account. We aren't forced to keep living with bad decisions we made earlier and building on top of them.

A microservice strategy only works if you have the support system around it.

In our case: an easy-to-use continuous integration and continuous deployment system (VSTS), easy configuration (Consul), service discovery and recovery (Consul with watches), automated unit and integration tests, and a method to standardize cross-cutting concerns.

And finally, Hashicorp's Nomad has been a godsend for orchestration. Our "services" are really just a bunch of apps. Nomad works with shell scripts, batch files, Docker containers, and raw executables. It was much easier to set up and configure than Kubernetes.


They can manage those teams by writing libraries, no need for microservices.


It's much harder to deploy a library fix to production (producing a new build of an integrated app in a large team is a big deal) than a microservice fix.


You'd be surprised. With a microservice you still need to coordinate the release with other consumers of it.


I'm well aware of that. This type of coordination is much easier than producing a new build in a complex corporate environment. The delivery of a microservice is limited in most cases to a change in the environment inventory file plus integration testing. The delivery of the build requires source code management (you'll need to push the dependency change into the right branch), probably a full application SQA cycle (to make sure there are no side effects, e.g. due to a transitive dependency update), and only then deployment and integration testing.



It shouldn't be, but in practice it sometimes is. It's not the pushing of bits that's hard; it's ensuring that the library got everywhere it needed to go and that the new release of the other users of that library didn't introduce any bugs.

Imagine you have a library that implements the serialization and deserialization of something in your system (or anything else where the library implements two "halves" of some functionality). You might (or might not, depending on the change) need to push out that library to other apps to effect the fix you're working on. That new proposed library change may be in a queue behind another pending change that's in development, etc.


> it's the ensuring that the library got everywhere it needed to go and that the new release of the other users of that library didn't introduce any bugs.

And guess what, you have to do the same thing with microservices. You must ensure that all its clients are pointing to the correct version, and you must ensure that the new version didn't add any bugs in the clients.

Adaptations for new service versions will also get into the release queue, and can also get blocked by other stuff.


What if you expect the team of 7 to grow to 20 or more? In a larger context, almost any company would hire more developers if they could find and hire them.


Premature optimisation...

YAGNI is almost always the right approach.


I don't think you should always go with a microservice approach first, but you should always separate different domains/features into well defined modules/projects even within a monolith.

There have been plenty of times where I've taken a project out of a monolith and created a separate shippable package to be consumed by another monolith in a separate repo.

There were also occasions where I had to rip a feature out of a monolithic API because it needed to scale or be released independently, and then just created a facade in the original API that proxies to the "microservice".
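
That last pattern is small enough to sketch. Roughly, and with a made-up route and service URL (Flask and requests here purely for illustration), the facade can be as little as:

    import requests
    from flask import Flask, jsonify

    app = Flask(__name__)
    REPORTS_URL = 'http://reports.internal:8080'  # hypothetical extracted service

    @app.route('/api/reports/<report_id>')
    def get_report(report_id):
        # Old clients keep calling the monolith; we proxy to the new service.
        resp = requests.get('%s/reports/%s' % (REPORTS_URL, report_id), timeout=5)
        return jsonify(resp.json()), resp.status_code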


Completely agree.

I mostly tell my people "We aren't going to do that yet - or possibly ever - but keep in mind that we might have to if we ever win the user growth hockey-stick graph lottery, so when choosing between options of how to architect things, prefer very narrow focus that _could_ be pulled apart into microservices more easily if practical"

I've also been known to stand up duplicated instances of a monolith with path routing on the load balancer as a "pretend microservices" scaling or upgrade trick.


Those growth points, and their associated market knowledge and budgetary constraints, would become perfect transition points into a friendlier architecture for the team... Plan for change, more so than trying to plan against change.

Decomposing and fragmenting an established service into smaller clones of itself should be reasonably straightforward. In most languages with a component story (e.g. Java), library reuse across microservices should make your service architecture orthogonal to your application logic. This kind of approach also eases a lot of the pain of cross-service issues.


Jesus, 10 people on a team? What are you doing that you need 10 people to work on it? I manage like 8 services by myself.


Well, I've also worked for places where I'm on call for some service maintained by one person who built their little impregnable abstraction castle because nobody else had to work on it, so they could just yak-shave and bike-shed by themselves.

These days I kinda wince at "I run 8 services myself."


Those of us who have been around since the 80s-90s are astonished by the low productivity of today's programmers.

If it takes 1 person to run 1 microservice, we are all doomed.


It's because that 1 micro-service has 300 dependencies.

Higher levels of abstraction make it easier to get something up and running fast, but at some point you need to be able to look under the hood and understand what's going on, and many programmers today can't do that.

That being said I think the drop in average skill is mostly a product of the growth in the number of programmers. I imagine that if the ability to sculpt a basic statue suddenly became really valuable, the skill level of the average working sculptor would plummet.


> Higher levels of abstraction make it easier to get something up and running fast

More layers of indirection in a system and more dependencies on external libraries and tooling does not necessarily get you any abstractions. To take a contemporary example, there is no "abstraction" in being driven to use Docker because your dependencies have gotten unmanageable otherwise.


Docker is an abstraction...


No it's not. OS-level virtualization is the abstraction. Docker is a set of tools to manage Linux containers and virtual filesystems. You can argue that libvirt is an abstraction because it does actually work over several different virtualization technologies.


You really think that the productivity of today's programmers is less? What evidence and data do you have for that?


Warning: all anecdotes. While I don't think that productivity has decreased, what was Notepad, a compiler, some DLLs, and a debugger before has turned into a thousand little packages, several configuration files, and a bunch of servers you have to run on your dev computer, which is also 10x more capable, yet everything feels so sluggish.

I also feel that ceteris paribus, the meetings got longer, project management tools now consume a lot of input from programmers, and I need to communicate with a lot more people to get something done.


>"ceteris paribus" Latin for "all things equal".


Sounds to me like productivity went down...


I think the complexity of solutions has gone up (especially in webdev), it seems to me perhaps complexity has gone up way further than actual requirements or new features would suggest...

Which seems to end up meaning productivity has gone down when measured by "things end users of websites can do", even though modern FE devs end up creating much more code and html and css than "the old days". (Admittedly, if you include privacy invasion, user tracking, and various other requirements of surveillance capitalism, dev productivity has probably skyrocketed...)


...eh, yes: producing mindless importing stuff, generating code, and tracking garbage bureaucratic stuff went up. Just importing energy-consuming crap. Not personal 'productivity' IMHO, but 'work simulation' by click, giving 'reason'. 'Solutions' for 'no problems', e.g. a subscription model for automated driving... If easy money is to be made, the craziness starts...

Complex? We still call a function with a return value on a stack machine.

Sorry for the negativity.


In the old days, if you managed to create a website, it was great. Now people compare everything with Google and Facebook. If you just put something together fast, people would laugh at you. So everything takes a lot longer. The effect is that you get to solve fewer problems.


"even though modern FE devs end up creating much more code and html and css than "the old days""

And yet, most homepages today can't be viewed without JavaScript. You are correct: for the end user, the complexity has absolutely not resulted in better homepages, but worse ones.


On the other hand, I can now easily write web pages that let people query and view the results of large CFD simulations in interactive 3d.


That’s valid, there are times I wish we had some bench depth.


That's nothing. I once ran 20 services before breakfast.


Well, my pre-breakfast routine consists of

    seq 21 | while read port; do
        python2 -m SimpleHTTPServer 80$port &
    done


  seq 21 | while read port; do
    python3 -m http.server 80$port &
  done
FTFY


Why was this downvoted? I think this is a valid argument. There are plenty of small teams and individuals who have been managing a bunch of small services since before this was called microservices.


I downvoted because without context this just seems like the typical know-it-all statement from someone who has yet to realize how little they really know. There's no attempt to even try to appreciate how their situation may be different, just the default disdain so typical from some professionals incapable of thinking outside of their own perspective.


10 people managing a single "microservice" raises a red flag for me as well. It may be normal given the complexity, but my first thought is: why does the codebase require so many people to maintain it and add features? Are they just adding features like mad, or is there something squirrelly in there where most modifications take a lot of man-hours?


How micro can it possibly be if it takes 10 people to deal with it? Seriously.


"Micro" in this case relates to the services scope, not its operational footprint.

So if Netflix has a "user logon" service and a "payment processing" service used across their clients, you might be looking at a couple of "microservices" with hundreds of related employees. Imagine the services for Google's search autocomplete, ML, or analytics...

As the article states, the "micro" aspect is mostly in terms of deployment responsibility, freeing those 10-1000 employees from thinking about the totality of Google's/Netflix's operations before rolling out a patch :)


If "micro" refers to the scope of the service (i.e. very limited feature set), you might still need a team to run it, if the service has to handle a very high volume of transactions or provide very low latency, or both.


The scope of a microservice is commonly a bounded context in DDD. So depending upon the problem you might be talking about a lot of code.


It’s not unreasonable to imagine that any of the examples provided in the source post could require a team larger than 10 people.


Cause it's not very micro?


I think he is literally asking for the other perspective? "I run 8 services myself" can be read as arrogant, OR it can be read as "I do things very differently", which is how I read it initially.


That's how I meant it: the advantage of microservices is bite-size pieces of work, so 10 people on the same service sounds like they'd step on each other's toes. I run a bunch of services, but they're each debugged enough to be fairly low-maintenance and small enough that the edges are well defined.

It’s 8 or so but it’s possible for me to handle it all. If we add features they’re going to be new services, so adding big features to the services I manage is unlikely. It is more likely that I get a new service on my plate in 6 months time than getting additional members of the engineering team to work on already completed services.

I’m obviously not entirely alone... I’ll ask for help if I need it, and I help out with other people’s stuff too, but I am primarily in charge of them and I am responsible for keeping everything working well.


I think the context is pretty clear: microservices.

If it takes 10 people to manage one service, it is not a microservice by definition. It is more like a 10x-microservice or a macroservice.


It's not the number of people that is micro; it's the scope of the service. So what is the right level of manpower for a scoped service with unspecified operational demands? Unknowable.

The definition isn't small teams, per se; it's a small area of responsibility with a singular focus. A Lego block instead of Duplo. That lends itself very well to small teams, but you could reasonably have 100 people working on a service and call it "micro".

Reason being: if those 100 people weren't working on their scoped 'microservice' they would be part of a much, much, much larger pool working on the shared 'product', 'platform', or 'service' that contains that exact same functionality, only without the clarity/scalability of application boundaries surrounding the individual service components.

That's not to say microservices are ideal, just that the size of "micro" is highly relative to ongoing operations.


While 10 is more than many recommend, it's not that much more than Amazon's rule of < 7.


I suppose it depends if those 10 people are purely developers or include the other teams you'd need - monitoring, devops, etc. For a critical, high-performance microservice, I can easily see it needing the involvement of 2+ devs (at least one senior), product owner, project manager, QA, devops, monitoring, etc.


Because it's not a "valid argument", it's snark. The contempt is unnecessary and uninformed.


It's not snark, but it's also not an argument. It's genuine incredulity. I seriously cannot imagine a situation where 10 people work on the same minimal-function piece of code.


I get both why people downvoted you and why people defended you.

One developer can certainly be responsible for coding many microservices and even maintaining them depending on the scope, number of users, etc.

Sometimes people might even rotate with a core architect coordinating how they all interact and if they are consistent with each other.

Discussing microservices can be very confusing if people are thinking about different things, and, as always, when a good idea implemented in a specific context gets sold to people who don't use it as a silver bullet, we end up with a huge backlash like in this article.


Putting the I back in Microservices :)


I have a web app with a "file store" (e.g. a simple Dropbox over an API) that I manage. I haven't touched it in a year, and it uploads files, proxies them, resizes them, ...

And it's just one of many.


Real-world business servers are complicated even when you tease apart the different elements.


I think it has to do with control. I've written microservices and microservices tooling for a long time now... so long that when I started, we were just calling it something like (isolated-responsibility) SOA and we didn't have a fancy buzzword.

Developers want to own their thing. The desire for microservices springs up because of a lack of communication culture and a desire for siloification in a company's organization, to keep various interests from bothering the developers. Those almost always point to a failure of management, in my mind, rather than a technical failure.


Is this truly a management failure?

If a few things are true, I could see this as a win:

* I can isolate my developers from outside interests using microservices.

* My developers are more effective in each dimension (quality, retention/happiness, velocity) because they are isolated from outside interests.

* My software is easier to operate and more reliable because it is a microservice.

If any of these three things aren't true, then I agree. But I'm not sure that a "communication culture" can scale to a large organization and I'd like to see a truly large company (1000+ developers) successfully doing so. I've seen more success come from separation of concerns and well-deployed microservices seem to be fairly effective to this end.


Microservice architectures [usually] represent a management failure because they usually don't work well in the real world. It's easy to concoct a paper-only, theoretical version of an idealized microservice architecture, since you can gloss over all the real-world details and practicalities. Mapping that theory into the real physical world is a whole different ball game.

Conway's Law might as well be renamed "The Law of Microservices". Per Wikipedia [0], it states:

> "organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations."

"Microservices" are on a tear because they make a perfect cover for the blatant and bare expression of unbridled Conway's Law.

Such unbridled expression is much easier in the course of greenfield development, because it lets the core team of 2-3 people per service go about their development work without consulting anyone external. It lets them throw away any overriding convention or cultural concerns, and it avoids the difficulties of cross-group coordination. But it leads to a completely unmaintainable wreck when things transition into production.

This is not to say that no one will have a successful microservice deployment or that it's always a bad choice, but it usually goes way off the rails.

[0] https://en.wikipedia.org/wiki/Conway%27s_law


This I can entirely agree with. Framing my point around Conway's law works well, and I think I can reframe my question as: "is the expression of Conway's law a management failure, or is it the mis-structuring of the organization that causes the expression to be harmful?"

The parent post I was replying to seemed to simplify things down to "let your developers communicate and they'll build a more coupled system that works, instead of a morass of microservices that don't." That's another thing that looks good on paper but doesn't scale, at least in my experience.


I think my point was misunderstood, probably because I did a poor job of explaining it as I was cramming a hamburger in my facehole and watching the Pats win the AFC :)

I think in many cases microservice architectures appeal to engineering organizations with poor communication and cooperation skills, where developers desire to be strongly independent because management has failed to create a cooperative and coordinated dev and work environment. I think that's actually saying something very similar to the Conway's Law idea brought up by the other poster.


I wouldn't dismiss the effect of résumé based development either. Even if subconscious, having the latest buzzwords on your CV is a motivator.


For me that's a conscious motivator :-)


" Microserivces desire springs up because of a lack of communication culture and desire for siloification" - Would have upvoted this more than once if possible.


To put it in an even less flattering way, the real problem is developers, not the paradigm. Your company's code will be as good as your developers are, regardless of the paradigm.

Microservices will not help you if your developers have the same level of skill and foresight as whoever wrote the monolith, which is probably true if those devs were selected by the same hiring process that your company has today, subject to the same organizational effectiveness, etc.


I think you've actually hit the proverbial nail with the hammer. I've seen firsthand the terrible talent pool, at least here in the southwest US. We've done ourselves a disservice trying to get everyone and their brother to become a programmer because... economy! And, more accurately: I want more money.


They haven't hit the proverbial nail with the hammer; they've hit the railroad spike with a sledge. IMO it's 100% about your environment and the demands and obligations your development team has against its own obligations to deliver. I'm not going to say microservices are the cornucopia, the smoking gun, or even the smoldering slingshot, but much of the dissent in this thread seems predicated on the idea that they are the end result and not merely an optional path to take toward desired outcomes of deliverability.


Re: ...the real problem is developers, not the paradigm. Your company's code will be as good as your developers are, regardless of the paradigm.

Indeed! Good developers/architects spot repetition or weaknesses in current techniques and can often devise solutions that can be added to the stack or shop practices with minimal impact. You don't necessarily need paradigm or language overhauls to improve problem-areas spotted. Poor developers/architects will screw up the latest and greatest also.


"Your company's code will be as good as your developers" as good as your companies nn% of worst developers


> In my opinion, only the extremely good developers seem to comprehend that they are almost always writing what will be considered the "technical debt" of 5 years from now when paradigms shift again.

I've also seen really bad developers with that attitude: it's all crap, so just ship whatever already.

The good developers write code that can be replaced, rewritten, or rescaled later. Though, charitably, both monolithic service and microservice people are trying to do exactly that. It's just what sort of scale they're thinking about and what part of the software development lifecycle they think will be especially difficult going forward.


The critical distinction between "it's all crap, so just ship whatever already" and what the grandparent wrote is that "technical debt" doesn't reside in easily disposable code/components, but rather as may-need-to-be-rectified-but-maybe-not-right-now downsides in what is enormously useful and producing value.

Good developers create code that's prepared for the possibility of being modified repeatedly and becoming foundational; on the other hand, preparing for code/components to be thrown away is a no-op.


> ...preparing for code/components to be thrown away is a no-op.

You might be agreeing, but I find that the bad code that sticks around is bad code that is hard to get rid of. So in that sense, code that is removable isn't a no-op. It takes some effort, for instance, to keep a clear dividing line between two components so that either may be replaced someday.


> The good developers write code that can be replaced, rewritten, or rescaled later.

You make a very good point.

Over the years, I learnt that almost nobody except the developer and maybe one or two peer-developers cares about good quality code. The management just wants to ship services/products. They don't care how good the code is. All they care is that they can meet their deadlines. Of course, good quality code can increase the chance of meeting deadlines, but working long hours can also increase the chance of meeting deadlines. Management does not understand code, but they understand working long hours.

If I ignore this and still care enough to write good quality code, in nearly all projects, I am not going to be the only one to work on the code. There is going to be a time, when someone else has to work on the code (because responsibilities change, or because I am busy with some other project). As per my anecdata, the number of people who do not care about code quality far exceeds those who do. So this new person would most likely start developing his features on the existing clean code in a chaotic manner without sufficient thought or design. So any carefully written code also tends to become ugly in the long term.

In many cases, you know that you yourself would move out the project/organization to another project/organization in a year or so, and the code would become ugly in the long term no matter what you do, so why bother writing good code in the first place!

It is very disappointing to me that the field of programming and software development that I once used to love so much out of passion has turned out to be such a commercial, artless, and dispassionate field. How do you retain your motivation to be a good developer in such a situation?


> I learnt that almost nobody except the developer and maybe one or two peer-developers cares about good quality code.

No, but they do care that this seemingly (to them) trivial change a year later takes two weeks instead of half a day.

> How do you retain your motivation to be a good developer in such a situation?

I honestly don't know. I'm currently taking a little time out from work, but I'm dreading going back in a month because right now I am completely disinterested in computers and especially software, things I used to love and be incredibly passionate about. Now, with stuff like Meltdown and Spectre, technology just seems like this dumb house of cards and I have no energy for this BS. I'm pretty sure it's burnout and it will pass eventually, but I just hope it does so soon, as I don't know how else to pay the bills.

On the plus side, I’ve spent a lot less time at the computer these last few weeks and spent time on other interests, including learning sleight of hand and card magic. :-P


> good quality code can increase the chance of meeting deadlines, but working long hours can also increase the chance of meeting deadlines.

No, that's not true, except in a very superficial sense. Yes, long hours _can_ increase the chance of meeting deadlines... but often they don't. Or, more precisely, they only work if the code quality is decent.

"Code quality" is not about following whatever patterns are en vogue today, or using the latest dev language. It is mostly about simplicity - dealing with few things at a time, and making those things explicit. If you need to understand the entire solution, and the entire domain model, and all the edge cases before making the tiniest of modifications - long hours are not going to help you.

To go back to microservices: many companies claim to build microservices, but actually build a distributed monolith. This doesn't help productivity, it actively harms it.


As Peopleware says, overtime is never free. Yes, it can definitely help you meet deadlines if it's done every so often, but if it's a regular thing, then "normal" productivity will be much lower, so it can't sustainably help meet deadlines again and again and again.


Maybe go where the hardcore stuff is done? Compilers, or spaceships, satellites, airplanes? I've no idea... :/ Tell me if you find one. I just stopped and started drawing again; at least anyone can 'see' it. Well, modern art is a lot of bullshit too ;)


I think that's a problem of money flooding the gates. Until just recently I was afraid of the tech bubble popping, but seeing people who never cared about or understood tech entering the field, I'm afraid it may never pop. Of course, if I were any good I shouldn't care about that and would be working my own way, but that's not the case.


There are good organizations out there, or at least good teams. The problem is that most organizations are just in it for the short term...


Management acts this way because they have no way of evaluating the effects of maintaining code quality. Only developers with significant experience are able to evaluate the long-term impact of each technical decision (and even we are still not very good at that).

<rant target="not you">

And why should management be able to evaluate technical decisions and code quality in the first place? That's our job! The problem is that we have given management the false choice of lower quality + faster shipping vs. higher quality + slower shipping, when in fact we should not have given them any say in the matter. And before you hit me with the "but we have to be first to market and fix it later" line, I need to point out that companies don't die because they were not first to market; they die because their operations and development became so slow and costly that they could not compete anymore.

What we should do as developers is stop talking about code quality to our managers! When we are asked for an estimate, we give them as accurate an estimate as possible, with the code quality that we feel is sufficient for the long-term maintainability of the system. And we don't negotiate on quality anymore, and we especially don't negotiate on estimates - only on functionality (MVP and all that). Then we don't need to ask for refactoring time, rewrite time, code-polish time, or stabilization time on our systems (which we hardly ever get anyway), because it is all in the original estimate.

Management expects stable productivity; they base their estimates of operational and investment costs on the number of people working on the system, not on the age of the codebase ("why should an old codebase cost more to work with?", they ask). If we give them false hope about the team's productivity by producing crap fast in the beginning, the whole business case may collapse when we produce the same crap slower and slower and slower later. Management is in no position to evaluate the effects of bad code on the business case, because they don't understand it. We do. The only thing we can do as developers is remove the option of low-quality code altogether.

And if we object that it is slow to create quality code? That estimation is hard?

We learn. We can write high-quality code as fast as the usual junk we see in most systems. We keep track of our estimates, evaluate how well we did, and improve. It takes effort, but all I can say is that it is our responsibility.

</rant>

On a positive note, the solution to this on a personal level is to find a place to work where technical excellence is built into the development culture (and there are such places), and to cultivate that culture, especially with the new hires (mentoring, pairing, etc.).


The #1 thing they should teach in any engineering school (or maybe any school period) is "you shouldn't remove/replace/change something until you've understood why it was done that way in the first place". Or maybe a somewhat equivalent version: "you're not as clever as you think you are and the people who came before you were not as dumb as you think they were".


This principle is commonly known as Chesterton's fence. https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence


Would those graduates be rewarded or penalised for their professionalism? I would guess penalised in a depressing number of situations.


>"you shouldn't remove/replace/change something until you've understood why it was done that way in the first place".

That only works until you encounter something for which there was no rational reason in the first place.


I've yet to encounter anything erected for no reason at all. But there are plenty of projects where rationality didn't have much of a role in their creation. Those can usually be tied to a previous dev/engineer wanting to pad their resume with a new project, maybe with a new framework. Or a manager who wanted their team to have more completed work, to look better on some miscalculated performance metric. Failing those options, it could be a bored employee who just wanted to make _something_.

That is to say - most people are acting rationally, but to an outsider their context may not appear rational. Maybe there's something to be said for a general relativity of rationality?


A little over two years ago my little team of three (gasp!) succumbed to the allure of microservices. After six months of writing custom solutions for problems I'd created, I bagged it. The _one_ upside of our microservice architecture was how simple it was to consolidate back into a single app. Only took a couple months. I believe the theoretical benefits of microservices, namely, hard domain boundaries, remain compelling, but geez, I also learned its benefits are extremely circumstantial.


Props to you for realizing you were going down a bad path and cutting your losses. Many people are unwilling to honestly evaluate their projects until they've been in the rear view mirror for a long time. A great deal of software is driven by ego and fads, and it's great to see someone make a decision to change pace based on practicalities.


I've been writing code since the 70's. The further back in time one looks, the worse my code is. The good news, I suppose, is that one is never finished learning how to write code better. Until they plant me, that is.


Do you legit think that people are all just doing this as a fad?

There are legitimate arguments for looking at these patterns, the big one being "isolation of concerns". The biggest counterargument is that the ops cost is much higher than assumed, of course.

The idea that the existing code base could have problems shouldn't be a surprise to anyone. Amazon almost fell over because of their code base. Twitter too. And it's not even a matter of not doing it right - it's simply that scales change. Or patterns change.

And in new companies, it _could_ be that people don't get it.

"Microservices as mass delusion" discounts a lot of people who are really thinking hard about how to handle the pros and the cons of things.


> Do you legit think that people are all just doing this as a fad?

Yes. That's been my experience.

> The idea that the existing code base could have problems shouldn't be a surprise to anyone. Amazon almost fell over because of their code base. Twitter too.

Most organisations aren't Amazon. Most organisations aren't Twitter. And even these web-scale organisations aren't as all-in as the microservice advocates. (I worked at last.fm for a time and while we did many things that could be classed as "microservices" from a certain perspective, we didn't blindly "microservice all the things")

> "Microservices as mass delusion" discounts a lot of people who are really thinking hard about how to handle the pros and the cons of things.

Most fads start from a core of sensible design. The web really did revolutionise commerce, but many "x, but on the web" companies of the late '90s really were dumb.


> Do you legit think that people are all just doing this as a fad?

As you say there are a lot of pros and cons to any architecture or paradigm, which is why we're still talking about it and saying things like "right tool for the job" and not just using the One True Method(TM).

I legit think that a lot of people are using the new hotness as a form of cargo cult programming, with no understanding of the methods they're considering or how they apply to the problems they're trying to solve.

It's not just microservices that are improperly applied. I've been in the industry long enough to see dozens of languages, technologies, paradigms, processes, and everything else hailed as the second coming of Christ and applied inappropriately all over the place until the shininess wore off.

And I mean... When we start talking about developing for Amazon scale, we're already talking about situations that don't apply to 99% of developers. Not a great argument that their cases aren't inappropriate applications of the pattern.


The ops cost of not using microservices is a lot higher than you'd think, too. At some point you have hundreds of engineers, your monolith is compiled together from libraries written by a dozen different teams, you try to make one heroic release per week except half the time it fails and you have to go back and fix it, and absolutely no one in the company can ship a new feature because everything is blocked.


You don't need microservices to fix that; you need CI and maybe some release processes for your libraries.

There will come a point where you can't scale a monolith, sure, but that point is thousands rather than hundreds of engineers.


Also, there is a nice middle ground between microservices and a monolith.


I'm gonna just be honest here and say that I don't really know the difference between "microservices" and "service-oriented architecture" except for the notion that "microservices" implies breaking up your services before they get too big and hairy, which kind of seems obvious.

If some companies go completely nuts and deploy 5 microservices per developer, then yes, that is madness. If all of those microservices are REST/JSON microservices, that's madness squared because you're wasting half your time in serialization/deserialization. If you're managing all of this stuff with Kubernetes because it's trendy rather than because it's actually necessary for your use case, then you're probably making a bad call.

But ultimately, services give development teams ownership over features in a complete, end-to-end way that is fundamentally impossible with monoliths once you're past a certain size.


> I'm gonna just be honest here and say that I don't really know the difference between "microservices" and "service-oriented architecture" except for the notion that "microservices" implies breaking up your services before they get too big and hairy, which kind of seems obvious.

The impression I get - and this could be totally wrong - is that the difference is how the services relate to each other.

In this post's example, 5 of the 6 microservices look like something that would be exposed to the user in some way. I would call this a microservice architecture; they're networked together in a way where no one thing orchestrates the others. They likely all touch the same data storage, but the overall structure is mostly flat.

In a service-oriented architecture, there'd be a tree-like or graph-like hierarchy. To an end user, it would look monolithic, but the monolith would, behind the scenes, delegate to the "user-facing" services such as Upload and Download. Upload and Download would then use the Transcode service as appropriate. But the important part is that this would all be one way: Transcode isn't allowed to contact Upload or Download, just return the result to whatever called it.


> the pattern I've seen time and time again is Developers thinking the previous generation had no idea what they were doing and they'll do it way better. They usually nail the first 80%, then hit a new edge case not well handled by their architecture/model

Perfect - I've seen this happen many times as well. I think you're generous on the 80% part. Usually they nail the first 50%, but by the time they get to 80% it's getting just as messy, and some of the developers are already planning another rewrite.


It's not exclusive to inheritance, though. There are lots of times the original author decides he could have done it better and winds up in exactly the same place. It can be progress, too, if it trades one set of failure modes for another that is less severe or less frequent.

This is why a team needs access to a good architect who's seen the paradigms shift, or even cycle. You're almost never starting from scratch, so you really need someone who can incorporate better or more suitable tech without throwing out the baby with the bathwater.

If you're microservices-based, that last part is easier, even if it falls into one of the described pitfalls, e.g., system-of-systems.


This is not wrong; it is also, I think, the reason behind the "framework of the month" craze in JS that seemed to be a thing last year or the year before. Implementing business value is boring; working with new technology is cool and fun. I'm not impervious to that either - I mean, I'm writing an HN comment while I should be building a feature :p


You need to be keeping a ledger on technical debt, the same way any and all other "debt" is tracked - http://www.hydrick.net/?p=2394

"Here’s the thing, most of the time we do something that incurs technical debt, we know it then. It’d be nice if there’s a way for us to log the decisions we made, and the debt we incurred so we can factor it into planning and regular development work, instead of trying to pay it off when there’s no other alternative."


I was wondering how such a ledger might work, given that it should be about as difficult to quantify this kind of debt as it may be to estimate the time and effort required to implement certain changes in general. Later on in the post, there's this nugget of gold:

    So how would this ledger work? Well, for starters it has to
    track what we can't do because of current technical debt.
    It should also be updatable to note any complications to
    subsequent work or things you can't do yet because of old
    design decisions. At this point, you're tracking the
    "principle" (the original design decision causing technical
    debt) and interest (the future work that was impacted by
    the debt).
That's it, isn't it? You need to define the principal and the interest, and these two are actual tangible things in the form of specific decisions (the principal) and the things now adversely affected by them (the interest). If these are linkable, then it becomes trivial to put a number on this debt, whether it is simply the count of things adversely affected or some other aggregate, like the combined estimated effort of those affected items. The debt could be calculated in many different ways, but the fact that you can properly quantify it should make decisions on whether to tackle or ignore it much more informed.
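
To make this concrete, here's a toy sketch of what a ledger entry might look like (Python; the field names and examples are my own invention, not from the linked post):

    from dataclasses import dataclass, field

    @dataclass
    class DebtEntry:
        principal: str  # the original shortcut taken
        interest: list = field(default_factory=list)  # work impacted since

    ledger = [DebtEntry("hard-coded all prices to USD")]
    ledger[0].interest.append("EU launch blocked on multi-currency work")

    # crude cost signal: decisions ranked by how much they have impacted
    worst_first = sorted(ledger, key=lambda e: len(e.interest), reverse=True)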

This was an eye opener for me – thanks for sharing!


Figuring out how to link the "principal" and the "interest" is definitely the linchpin ... but if you're not even noting that there is principal and interest, you'll never be able to recover.


While I'm not arguing with you, I have had a different experience working with developers and microservices, perhaps because the teams have been more seasoned/experienced and there is more of a collaborative environment.

I found that a lot of the time developers start moving towards microservices when they find that a monolithic app has become too difficult to work on. For example, multiple teams working on the same codebase will often have accidental code conflicts. Plus, scaling a monolithic app because one part of it is under load isn't always cost-effective or logical. So teams start to break off components into microservices to make development easier and less painful. Naturally this has to be weighed up, as microservices bring a different set of challenges, 'gotchas', etc., but in my experience the teams have done a proper job of discussing the pros and cons.


I think that kind of process, slow decomposition based on performance and requirements, is the only sane approach.

It's also reflected in how we manage code at the micro-level: collecting related logic into a module until it becomes unwieldy and then separating out independent sub-functionality into their own modules and dependencies as they grow...

There is no right size for a class. Smaller is better, but the ideal is "right sizing". What's right? Well, that's tricky, but whatever doesn't hurt is pretty ok.

There's no right size for a service. Smaller is better, but...


Yep, there is no real "one size fits all" to microservices


I 100% agree. I even catch myself thinking I didn't know what I was doing when looking at code I wrote years before. I'll jump into rewriting it and discover it was written that way for a reason. I've learned to trust my former self wasn't an idiot.


That's absolutely true, but it's not specific to microservices. The pendulum could very well swing back the other way

I've come to view microservices in the context of Conway's Law. If you have a team of developers working on a project who don't like to communicate or work with each other, do not understand version control, and all have different programming styles and technology choices, the only feasible architecture is one service per person.

I have no trouble believing that this is what's really behind Netflix's adoption of microservices. From what I've heard it's a sociopathogenic work culture, and if I worked there I would probably want to just disappear from everybody too.


You do have to use Conway's Law to your advantage. At least be very aware of its effects.


The monolith-first approach has always served me well. Nascent projects benefit from portability because they are in a high state of flux. As they mature, let's assume they grow in scale and integrations, and somewhere along the line it becomes sensible to break off pieces into services.

To me the big benefit of microservices is breaking components out into flexible, independent release cadences, but the trouble comes from employing them too early.

https://martinfowler.com/bliki/MonolithFirst.html


I think these are appealing for 2 reasons:

1. There is a belief that component isolation (taken to an extreme by microservices) enables better productivity in the development department.

That is: more features, more prototypes, more people who can be moved in and out of a given role - so that those 5 crusty programmers are not a bottleneck for the 'next great idea' that a product manager or CIO reads up on.

2. There is a constant battle for the crown of "I am modern" (e.g. data science, microservices, big data) going on in every development or technology organization, where the closer your 'vision' is to Google or Netflix, the more 'modern' you are.

The rest of the folks are 'legacy'. So you get budgets, you get to hire, you get to 'lead'. Microservices are an enabler for winning this battle (although probably only in the short term).

---

I personally do not believe that microservices bring anything new compared to previously used methods of run-time modularization:

  Plugins
  Web services
  RPC
  N-tier architectures

I do not think they replace standards like CORBA, although I think they will eventually end up replicating it, with better-thought-out standards and tools.


I don't think microservices are loved by developers so much because they are technically superior, but because they allow for quicker/safer decision-making at the management level - just like "agile development" IMHO only helps trainee devs but shines at providing clear communication strategies between management and developers (read: keeps management off devs' backs for at least 6h a day).

Abstractly speaking, I don't care whether you call f(x) directly, via IPC, via RPC, or as a microservice. In my preferred programming languages there is not much of a difference anyway.


If something isn't broken, don't fix it. I think part of it is that large companies can keep engineers happy by giving them rewrites. Otherwise there aren't enough projects to keep everyone entertained.


Well, the best-architected project I ever worked on at Google was actually rewritten 4 times from scratch. I believe rewriting is always good for the project - not always for the business, though. Fortunately, Google had the resources to allow rewrites to happen.

Rewrites also serve as thorough code review and security audit.


Were the previous ones not getting the job done? 4 rewrites over what time period?

I am more in the camp of "let's do something useful" than "let's rewrite this because the previous guy didn't do it well, or it no longer meets our demands". Because whatever you do, it will get rewritten again, and it's imho useful to resist the urge.

Ps: also a googler.


This is usually true but I’ve recently been introduced to a legacy codebase which is so bad it can hardly be modified.

Mixed tabs and spaces, sometimes one-space indentation or no indentation at all, 1000+ line Java methods, meaningless variable names, no comments or documentation. SQL transactions aren't used; the database is just put into a bad state, and hopefully the user finishes what they're doing so it doesn't stay that way. That's just the server. The UI is just as bad, and based on Flash (but compiled to HTML5 now).


An important piece of advice I was given when embarking on my technical career:

> You don't solve problems. You take the problems you have, and exchange them for a different set of problems. If you're doing your job, the new problems won't be as bad as the old problems. That's all you can really do.



The reinvention and rediscovery of the problems can be very good for the new developers who are taking over a slice of the monolith's functionality. And it can happen rapidly, on the new developers' own terms. Depends on the case.


I wonder why. Isn't it unproductive to keep tearing stuff down and rebuilding it with the next shiniest tool? I mean, you're making very little progress on product features.


But in fact, K8s provides more robustness than the good old, kinda-monolithic Pacemaker.


Damn, you described my manager.


Biggest issue with microservices: "Microservices can be monoliths in disguise" -- I'd omit the "can" and say 99% of the time they are.

It's not a microservice if you have API dependencies. It's (probably) not a microservice if you access a global data store. A microservice should generally not have side effects. Microservices are supposed to be great not just because of the ease of deployment, but it's also supposed to make debugging easier. If you can't debug one (and only one) microservice at a time, then it's not really a microservice.

A lot of engineers think that just having a bunch of API endpoints written by different teams is a "microservice architecture" -- but they couldn't be more wrong.


Once when starting a new gig I inherited a "microservices" architecture.

They were having performance problems and "needed" to migrate to microservices. They developed 12 separate applications, all in the same repo, each deployed independently in its own JVM. Of course, if you were using microservices you needed Docker as well, so they had also developed a giant Docker container holding all 12 microservices, which they deployed to a single host (all managed by supervisord). Of course since they had 12 different JVM applications, the services needed a host with at least 9GiB of RAM so they used a larger instance. Everything was provisioned manually, by the way, because there was no service discovery or container orchestration - just a Docker container running on a host (an upgrade from running the production processes in a tmux session). What they really had was a giant monolithic application with a complicated deployment process and insane JVM overhead.

Moving to the larger instance likely solved the performance issues. In its place they now had multiple over-provisioned instances (for "HA") and, combined with other questionable decisions, were paying ~100k/year for a web backend that served no more than ~50 requests/minute at peak. But hey, at least they were doing real devops like Netflix.

For me, it made me much more aware of cargo cult development. I can't say I'm completely immune to cargo-cult-driven development either (I once rewrote an entire Angular application in React because "Angular is dead"), so it really opened my eyes to how I, too, could implement "solutions" without truly understanding why they are useful.


> They developed 12 separate applications, all in the same repo, each deployed independently in its own JVM.

I've dealt with an even worse system, with a dozen separate applications, each in its own repo, then with various repos containing shared code. But the whole thing was really one interconnected system, such that a change to one component often required changes to the shared code, which required updates to all the other services.

It was a nightmare. At least your folks had the good sense to use a single repository.


Agreed. Multiple closely dependent repos by one organization is the real nightmare.


> then with various repos containing shared code

What source control system?

Also, from the article:

> even though theoretically services can be deployed in isolation, you find that due to the inter-dependencies between services, you have to deploy sets of services as a group

This is the situation we are in, like you were.


> What source control system?

Git in our case. And our direction was not to use submodules or anything like that to make life manageable. It was pretty unpleasant.


You don't always have to know why, but it's somewhat frightening that so many "engineers" don't have a clue why they are doing something (because Google does it). And I'm of course guilty of it myself, jumping on the hype train or uncritically taking advice from domain experts, only to find out years later that much of it was BS. Most of the time, though, you will not reach enlightenment. I guess it's in our nature to follow authority, hype, trends and groupthink.


They're only following their incentives.

What's gonna look better on a dev's CV: 'spent a year maintaining a CRUD monolith app' or 'spent a year breaking a monolith into microservices, with shiny language X to boot'?

We can be a very fashion and buzzword driven industry sometimes.

EDIT: this perverse incentive goes all the way to the top, through to CTO level. Sometimes I wonder if businesses understand just how much money and effort is wasted on pointless rewrites that make life harder for everyone.


"Resume driven development" :)


> it's somewhat frightening that so many "engineers" don't have a clue why they are doing something (because Google does it).

This doesn't stop at engineering; open offices, teaser/trick based interviewing, OKR's, ... Even GOOGLE doesn't do some of those things anymore, but the follower sheep still do.


I recently had a similar experience; our product at work is a monolith, not in the greatest shape thanks to technical debt we inherited, and it is usually spoken of condescendingly by other teams working on different products. To our surprise, when we started testing it with cloud deployments, it was really lightweight compared to just one of the 25 Java microservices from the other teams.

Their "microservices" suffered from the same JVM overhead and to remedy this they are joining their functionalities together (initially they had 30-40).


I'm switching to Go, partially because of the JVM. Hopefully I'll get better partitioning on a single small box as I start out.


>They were having performance problems and "needed" to migrate to microservices. They developed 12 separate applications, all in the same repo, each deployed independently in its own JVM.

9 times out of 10 it's because developers don't know how to properly design and index the underlying RDBMS. I've noticed a severe lack of that knowledge in the average developer.


Sounds like they didn't understand why it's called a microservice to begin with. Microservices aren't supposed to be solutions to an entire piece of software, just dedicated bits - at least that's what I'd figure from a name like "micro". When we adopted microservices at my job (idk if Azure Functions count or not), we did it because we had one task that needed to be taken out of our main application, both for performance reasons and because we knew it would involve way more work to implement (a .NET Framework codebase being ported to .NET Core, which meant the .NET Framework dependencies no longer worked). We eventually turned it into a WebAPI instead, due to limitations of Azure Functions for what we wanted to do (process imagery of sorts).


> idk if Azure Functions count or not

Azure Functions are technically a "serverless" product, but using them as y'all intended to is a textbook definition of a "microservice" :)


> Of course since they had 12 different JVM applications, the services needed a host with at least 9GiB of RAM so they used a larger instance.

Well, experimentally, Oracle has somewhat solved that problem: you can now use CDS and *.so files for some parts of your application. It probably doesn't eliminate every problem, but it helps a bit at least. It would've been easier, though, to just use Apache Felix or the like to start all the applications in one OSGi container - that would've probably saved something like 5-7 GiB of RAM.


> A microservice should generally not have side effects.

That's plainly wrong. I get the gist of what you are saying and I more or less agree with it but you expressed it poorly.

Having API dependencies is not an issue. As long as the microservices don't touch each other's data and only communicate with each other through their API boundaries, microservices can and should build on top of each other.

In fact that's one of the core promises of the open source microservices architecture we are building (https://github.com/1backend/1backend).

I think your bad experiences come from microservice apps that are unnecessarily fragmented into lots of services. Sometimes that's a problem even when you respect service boundaries - when you have to release a bunch of services to ship a feature, that's a sign you have a distributed monolith on your hands.

I like to think of services, even my own, as third-party ones I can't touch. When I view them this way, the urge to tailor them to the feature I'm currently hacking on lessens, and it's easier to identify the correct microservice a given modification belongs to.


> That's plainly wrong. I get the gist of what you are saying and I more or less agree with it but you expressed it poorly.

I'm not sure what you think side effects are, but I'm using the standard computer science definition you can look up on Wikipedia. If you have a microservice that modifies, e.g. some hidden state, it's a disaster waiting to happen. Having multiple microservices that have database side-effects will almost always end up with a race condition somewhere. Have fun debugging that.


I'm using the same definition. Writing data to a database is a side effect. If there is no side effect, then what's the point of calling a service? It just does computation? Then who saves the result of that computation, thus performing a side-effecting operation?

If no one then what's the point of that service's existence?


A side effect is by definition some mutation that's out of the scope of the function -- if the purpose of the microservice is to put stuff in a database, then (by definition) it's not a side effect. Switching a flag on top of doing some work, on the other hand (e.g. flip "processed" to true in a global database) is a side effect.


Modifying data in a database is a side effect. Since you brought up the Wikipedia definition, here it is:

> In computer science, a function or expression is said to have a side effect if it modifies some state outside its scope or has an observable interaction with its calling functions or the outside world besides returning a value. For example, a particular function might modify a global variable or static variable, modify one of its arguments, raise an exception, write data to a display or file, read data, or call other side-effecting functions.

Note "write data to a display or file". I think we agree that writing to a database falls under this definition, hence using terms like "side effecting" when talking about microservices is misleading.


Yeah, I stand corrected. I am indeed conflating scope with purpose. Should've been clearer in my OP.


Technically it depends on whether anything outside the service can see the database, and whether the state is saved after the service returns. It might seem useless to call a DB if you're not going to save state, but a Rube Goldberg architecture could do so for the lulz.


If a function writes to a DB, yes, that's a side effect. I think you're conflating "scope" and "purpose".


Correct, my bad.


So, according to your advice, we shouldn't use micro-services that write to a database, ever? That doesn't make any sense to me. Multiple services writing to the same database can be bad, but a single service storing persistent state in a database is perfectly fine. Just because we're micro- doesn't mean we have to cut out 90% of what a service is.


How would services such as "sign up", "login", "edit profile", and others that need auth be split?


Just like all the other authenticated APIs in the world: you get a token when you log on, and use that token to authenticate yourself on future calls to the services. That's a lot of what OAuth and its ilk handle.

This management of API boundaries is likely handled for you by an app, though, so from a user perspective the story is still "open netflix, enter password, watch movie".
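
A rough sketch of the idea with PyJWT, assuming the simplest possible setup (a signing secret shared between the auth service and the others; names made up):

    import jwt  # PyJWT

    SECRET = "shared-between-auth-and-the-other-services"

    # the auth service issues this at login
    token = jwt.encode({"sub": "user-42"}, SECRET, algorithm="HS256")

    # any other service verifies it locally; no call back to auth needed
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    print(claims["sub"])  # user-42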


But how is the data shared? E.g., when you sign up, your data gets stored. But to edit it, or just to display it, you need to access it again - and that data belongs only to the sign-up microservice...?


> A microservice should generally not have side effects

I gotta ask, how is this realistic? A salient feature of most of the software I've worked on is that it has useful side effects.


It isn't realistic, and borders on absurd gatekeeping.


I think it is accurate to say that in a system composed of microservices, a microservice should not affect the state of other microservices in the system other than by consuming them.

Whether it should consume other microservices is less clear, and gets into the choreography vs. orchestration issue; choreography provides lower coupling, but may be less scalable.


Services consuming other services sounds like a recipe for spaghetti. I hope people use layering, where you don't call stuff that is in the same layer.


> Services consuming other services, sounds like recipe for spaghetti.

Can we extend that logic to classes or interfaces? Accessing data operations through a well-established API is generally seen as a good thing and is the exact cure for spaghetti...

Service APIs also entail load balancing and decoupled deployments, so they eliminate the unclear architecture that arises at the app level when trying to tune the whole for individual components - particularly when a shared component exists across multiple systems.

For a generalized microservices architecture, layering is a bit of a misnomer, as everything is loosely in the same 'service' layer... I'd also point out that in N-tiered applications, application services or domain services calling other services at the same layer is seen as the solely approved channel for re-use, not an anti-pattern.


>Can we extend that logic to classes or interfaces?

This was one of the main ideas behind the original definition of OOP. The original notion of "object" was very similar to our current notion of "service":

https://www.youtube.com/watch?v=QjJaFG63Hlo

Objects received messages, including messages sent over the network. There was not supposed to be a clear distinction between local and remote services - by design. A lot of inter-computer stuff could be/was handled transparently.


Look, I generally prefer choreography over orchestration, too, but architectural dogma can conflict with pragmatism, and at a minimum Netflix’s argument as to why they found orchestration more usable at scale seems plausible enough for me not to reject it out of hand without experience with the two patterns at anything like Netflix scale.

Just like sometimes, in real-world C, “goto” is the right tool, even though arbitrary jumps of that kind are also a “recipe for spaghetti”.


A microservice, imo, should just be a simple black box that takes in some input and returns some output (sometimes asynchronously). No side-effects necessary. No fiddling with database flags or global state, and definitely no hitting other microservices. See @CryoLogic's post for a good example. This means that you simply can't build some things using microservices -- like logging in a user -- and you'd be right.
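
Something like this toy handler is the mental model (entirely made up, stdlib only):

    import json

    # black box: everything it needs arrives in the request,
    # everything it produces goes back out in the response
    def handle(raw_request):
        req = json.loads(raw_request)
        payload = req["payload"].encode("utf-8").hex()  # stand-in "work"
        return json.dumps({"id": req["id"], "payload": payload})

    print(handle('{"id": 1, "payload": "hello"}'))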


Aren't you talking about pure functions, more applicable to serverless than the commonly accepted use of microservices?


> A microservice, imo, should just be a simple black box that takes in some input and returns some output

I am not quite sure what you mean. A microservice with a REST API that has a POST method is not a microservice?


But that is ridiculous. Fiddling with database flags is silly, I agree, but inserting into and updating a database is a completely normal side effect of most business logic. So if your microservice handles any ordinary feature of your business solution, it will almost certainly have side effects, because it will write to the database. There might be services that just do some computation in memory and return the result to you, but I think those will be a small minority; most services dealing with features such as payments, subscriptions, identity, etc. (just some examples) will have useful side effects.


I agree, mostly. A black box can have internal state, but it should not share state with another black box (that defeats the purpose of calling it a black box). If two black boxes (microservices) shared state, we'd need to think of the composite as a single black box.

If two black boxes directly contact each other, that also defeats the purpose. Microservices are not appealing unless they talk via message queues. The whole point of microservices was to handle scale independently for independent functions.

Where do you suggest storing that state if it needs to be persistent? The definition of microservices should not assume anything about how long I need to keep my data. If two of your microservices are touching the same database fields, that's the implementor's mistake.


Why can't you build a microservice that does a handshake with an API gateway to authenticate a user? When doing microservices you have to have a sane auth strategy, and that generally means you encapsulate authentication/authorization in a service that your gateway will talk to.


    a bunch of API endpoints written by different teams is a "microservice architecture"
Or chaos, or madness, or Bedlam.

Most people have enough trouble getting three methods in the same file to use the same argument semantics. Every service is an adventure unto itself.

We have a couple of services that use something in the vein of GraphQL, but some of the fields are calculated from other fields. If you have the derived field but not the source field, you get garbage output - and they don't see the problem with this.


> It's not a micro-service if you have API dependencies

Just out of curiosity, what alternatives are there to avoid API dependencies? Is it really possible to make non-trivial apps while avoiding internal dependencies?

At some level, is it really possible to have a truly decoupled system?


> Just out of curiosity, what alternatives are there to avoid API dependencies?

It matters a great deal how the boundaries are drawn. Generally, the more fragmented the microservices, the more API dependencies.

Also, look at the Bounded Context concept.

https://martinfowler.com/bliki/BoundedContext.html

And Conway's Law certainly plays a role.

http://www.melconway.com/Home/Conways_Law.html

> At some level, is it really possible to have a truly decoupled system?

You cannot avoid all API dependencies, but you can reduce their number.


I'm wondering if the original "API dependencies" comment didn't mean "shared API dependencies". As in, multiple API/services depending on the same shared code/library.

APIs calling other APIs is...well, I'm having a hard time understanding how that could be construed as fundamentally wrong.


I'm confused. If a microservice doesn't call the API of any other microservice, then who is sending the requests to any of them?

A large purpose of service oriented architecture is encapsulation. If no other microservices can make requests to your microservice, then you really haven’t encapsulated much.


I tend to think that the job of invoking the services lies with a gateway. For example, you can have a microservice for recipes, but a web gateway that knows all of the various integrations necessary to generate a page. So the web gateway is essentially a monolith.

If and when you need to support mobile devices independently of your web UI, you can have a mobile gateway. Same idea. This gateway is optimized to know how to handle mobile traffic realities like smaller download sizes, etc.
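
In sketch form, with hypothetical internal hostnames and fields (stdlib only):

    import json, urllib.request

    def fetch(url):
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    def recipe_page(recipe_id):
        # the gateway, not the services, knows how a page is assembled
        recipe = fetch("http://recipes.internal/recipes/%d" % recipe_id)
        reviews = fetch("http://reviews.internal/reviews?recipe=%d"
                        % recipe_id)
        return {"recipe": recipe, "reviews": reviews["items"]}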


I'm thinking this concept improperly conflates synchronous requests with eventually-consistent asynchrony.

No, you definitely don't want microservices making synchronous requests to other microservices and depending on them that way.

But it may still be necessary for your services to depend on each other, and that's where you can allow communication through asynchronous, eventually consistent channels: actor messages, queue submission/consumption, caching, etc.
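
A toy illustration of the handoff, with queue.Queue standing in for a real broker (names invented):

    import queue

    events = queue.Queue()

    def place_order(order_id):
        # respond to the caller right away; downstream work happens later
        events.put({"type": "order_placed", "order_id": order_id})

    def email_worker():
        while not events.empty():
            event = events.get()
            print("sending confirmation for order", event["order_id"])

    place_order(42)
    email_worker()  # runs on its own schedule: eventual consistency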


That is nice in theory, and I agree it should be done wherever possible, but a lot of the time business logic will require an immediate synchronous response, because the next step in the workflow will execute a different branch of logic based on the condition/result returned by the previous microservice, and the frontend/consumer/app will need immediate confirmation of whether the action succeeded.

Even in such cases you might want to move the bulk of the processing to an asynchronous, queue-based system, but part of the logic might need to execute synchronously (authorising a credit card payment, say: you can process the payment asynchronously later, perhaps in bulk cron jobs like Apple iTunes does, but the initial authorisation, which decides whether the purchase is successful, must be synchronous).


Microservices as the M in MVC?


Microservices are a back-end service pattern. MVC (Model-View-Controller) is a front-end pattern that enforces separation between data, UI, and interaction logic.


Models (should) abstract any data, providing services, and reside on the back end...


I do have a case of monoliths-in-disguise-itis.

I just wish someone with "street cred" (or with a famous, recognizable name I could use for an appeal to authority) would write a simple post saying: "Hey, if you have a shared data store that all services depend on and access directly, you are not doing microservices. And you also don't have microservices if you have to update everything in one go as part of a 'release'."

That way I could circulate it throughout the company and maybe get the point across. I've tried to argue this, unsuccessfully. After all, "we are doing K8s, so we have microservices - each is a pod, duh!" No, you have a monolith which happens to be running as multiple containers...


Microservices with a shared data source are just a service-oriented architecture, circa 2005. You might have a variety of middle-tier services, deployed on their own boxes or at the very least in Java service containers, but ultimately all talking to some giant Oracle DB behind them. Microservices that share a database are not deployed into a container running JBoss, and use something more language-agnostic instead, but it's ultimately the same thing. All you have to do is quote the many criticisms of that era, when any significant DB change was either impossible or required dozens of teams to change things in unison.

The best imagery I know for this picture is a two-headed ogre: it might have multiple heads, but it has one digestive system. It doesn't matter which head does the eating; ultimately you get the same shit. I've heard semi-famous people talk about this at conferences, but few articles.


Thanks for the imagery, I love that. Would have to be toned down a bit for a meeting but I can see it working :)


Martin Fowler on shared DB's: https://martinfowler.com/bliki/IntegrationDatabase.html

So yes, you now have an authority that says doing that is bad.


Yes! If you break something out into smaller parts but they're still entangled, you have actually added complexity instead of reducing it.


> If you can't debug one (and only one) microservice at a time, then it's not really a microservice.

It depends on what you want to debug. It's like unit tests vs. integration tests. If you are chasing a bug in the integration between multiple services, you definitely need to debug across multiple services.


Do you have examples of microservices and good data warehouses working well side by side? Your point makes sense, but I keep hoping for a way to have One Data Source of Truth working side by side with the services that access it.


A data warehouse really should be completely orthogonal to any architecture choices. Good data warehouses are fed by data-engineering pipelines that don't care whether you have a single RDBMS, multiple document stores, or people dropping CSVs in an FTP directory.

I hate to burst your bubble, but you shouldn't and can't have truth living alongside the systems that access it. Data is messy and tends toward dishonesty. The only way to get clean truth for your organization is by thoughtfully applying rules, cleaning and filtering as you go. The more micro your architecture is, the more this is true, because there is no way 20 different teams are all going to have the same understanding of the business rules around what constitutes good, clean input data. Even if your company is very clear and well documented about business and data rules, if you hand the same spec sheet to 20 different teams, you are going to get 20 variations on that spec.

The only way to get usable data that can be agreed upon by an entire company (or even a business unit) is by separating your truth from your transactional data. That's kind of the definition of a data warehouse.

If you let your transactional systems access and update data directly in your warehouse, you are in for a universe of pain.


> The only way to get usable data ... is by separating your truth from your transactional data.... If you let your transactional systems access and update data directly in your warehouse, you are in for a universe of pain.

I strongly agree with this assessment :)

I have posted a bit more on this nearby, but Apache Kafka is well positioned as a compromise to support both of those truths: an orthogonal data warehouse full of sanitized purity, and chatty apps writing crappy data to their hearts' content.

By introducing a third system between the data warehouse and the transactional demands, Kafka decouples the communicating systems and introduces a clear separation of concerns for cross-system data communication (be it OLAP or OLTP).

If your transactional data is crappy (mine is!) and you want your data warehouse pure (I do!), then Kafka can be a 'truthy' middle ground where compromises are made explicit, data digestion/transformation is explicitly mapped, and all clients can feast on data to their hearts' content.


This is my intuition too - but better written. Thanks!


You might want to look into Apache Kafka with log compaction, which provides a model to accomplish exactly that while also handling message passing/data streaming.

Your data warehouse can suck facts from Kafka (with ETL on either side of the operation, or even integrated into Kafka if you so desire), and you can keep Kafka topics loaded with micro-"truths" (current accounts, current employees, etc.). That way apps get essentially real-time, simplified access to the warehouse's data, while your data warehouse gets a client streaming story that's wickedly scalable. And no coupling in between...
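
For example, with the confluent-kafka Python client, those 'truth' channels are just keyed writes to a topic created with cleanup.policy=compact (broker address, topic name and payload are placeholders):

    from confluent_kafka import Producer

    p = Producer({"bootstrap.servers": "localhost:9092"})
    # compaction keeps only the newest record per key, so the topic
    # converges to "the current state of every account"
    p.produce("current-accounts", key="account-42",
              value='{"balance": 1300}')
    p.flush()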

It's a different approach than some mainstream solutions, but IMO hits a nice goldilocks zone between application and service communication and making data warehousing in parallel realistic and digestible. YMMV, naturally :)



You might as well just say "a library of static functions".

