
Goodbye Microservices: From 100s of problem children to 1 superstar - manigandham
https://segment.com/blog/goodbye-microservices/
======
Floegipoky
It seems like splitting into separate repos was a rash response to low-value
automated tests. If tests don't actually increase confidence in the
correctness of the code, they have negative value. Maybe they should have deleted
or rewritten a bunch of tests instead. Which is what they did in the end
anyway.

>> A huge point of frustration was that a single broken test caused tests to
fail across all destinations. When we wanted to deploy a change, we had to
spend time fixing the broken test even if the changes had nothing to do with
the initial change. In response to this problem, it was decided to break out
the code for each destination into their own repos

They also introduced tech debt and did not responsibly address it. The result
was entirely predictable, and they ended up paying back this debt anyway when
they switched back to a monorepo.

>> When pressed for time, engineers would only include the updated versions of
these libraries on a single destination’s codebase... Eventually, all of them
were using different versions of these shared libraries.

To summarize, it seems like they made some mistakes, microed their services in
a knee-jerk attempt to alleviate the symptoms of the mistakes, realized
microservices didn't fix their mistakes, finally addressed the mistakes, then
wrote a blog post about microservices.

~~~
dalbasal
This is a common pattern when it comes to semi-idealistic memes like
microservices or agile. I think it's a bad idea to have such hairy, abstract
ideas travel too far and wide.

They become a bucket of clichés and abstract terms. Clichéd descriptions of
problems you're encountering, like deployments being hard. Clichéd
descriptions of the solutions. This lets everyone in on the debate, whether
they actually understand anything real to a useful degree or not. It's a lot
easier to have opinions about something using agile or microservice standard
terms than using your own words. I've seen heated debates between people who
would not be able to articulate any part of the debate without these clichés;
they have no idea what they are actually debating.

For a case in point, if this article described architecture A, B & C without
mentioning microservices, monoliths and their associated terms... (1) _Far_
fewer people would have read it or had an opinion about it. (2) The people who
do will be the ones who actually had similar experiences and can relate or
disagree in their own words/thoughts.

What makes these quasi- _ideological_ in my view is how things are contrasted,
generally dichotomously. Agile Vs Waterfall. Microservices Vs Monolithic
Architecture. This mentally limits the field of possibilities, of thought.

So sure, it's very possible that architecture style is/was totally beside the
point. Dropping the labels of microservices architecture frees you up to (1)
think in your own terms and (2) focus on the problems themselves, not the
clichéd abstract version of the problem.

Basically, microservice architecture can be great. Agile HR policies can be
fine. Just... don't call them that, and don't read past the first few
paragraphs.

~~~
itsmenotyou
Interesting perspective. I think that seeking and naming patterns
"microservices", "agile", etc. is useful. It provides something like a domain
specific language that allows a higher level conversation to take place.

The problem, as you identify, is that once a pattern has been identified
people too easily line up behind it and denigrate the "contrasting" pattern.
The abstraction becomes opaque. We're used to simplistic narratives of good vs
evil, my team vs your team, etc. and our tendency to embrace these narratives
leads to dumb pointless conversations driven more by ideology than any desire
to find truth.

~~~
dalbasal
I agree that it's useful, I even think more people should do it more often.
Creating your own language (and learning other people's) is a way of having
deep thoughts, not just expressing them. Words for patterns (or abstractions
generally) are the quanta of language.

I just think there can be downsides to them. These are theories as well as
terms and they become parts of our worldview, even identity. This can engage
our selective reasoning, cognitive biases and our "defend the worldview!"
mechanisms in general. At some point, it's time for new words.

Glad people seem ok with this. I've expressed similar views before (perhaps
overstating things) with fairly negative responses. I think part of it might
be language nuance. The term "ideology" carries less baggage in Europe, where
"idealist" is what politicians hope to be perceived as, while stateside
"ideologue" is a common political insult meaning blinded and fanatic.

~~~
parasubvert
The issue is that it is rare and difficult to be able to synthesize all the
changes happening in computing and to go deep. So a certain “Pop culture” of
computing develops that is superficial and clichéd. We see this in many
serious subjects: pop psychology, pop history, pop science, pop economics, pop
nutrition. Some of these are better quality than others if they have a strong
academic backing, but even in areas such as economics we can’t get to basic
consensus on fundamentals due to the politicization, difficulty of
reproducible experiment, and widespread “popular” concepts out there that may
be wrong.

Concepts like microservices synthesize a bunch of tradeoffs and patterns that
have been worked on for decades. They’re boiled down to an architecture fad,
but have applicability in many contexts if you understand them.

Similarly with Agile, it synthesizes a lot of what we know about planning
under uncertainty, continuous learning, feedback, flow, etc. But it’s often
repackaged into cliche tepid forms by charlatans to sell consulting deals or
Scrum black belts.

Alan Kay called this one out in an old interview:
[https://queue.acm.org/detail.cfm?id=1039523](https://queue.acm.org/detail.cfm?id=1039523)

“computing spread out much, much faster than educating unsophisticated people
can happen. In the last 25 years or so, we actually got something like a pop
culture, similar to what happened when television came on the scene and some
of its inventors thought it would be a way of getting Shakespeare to the
masses. But they forgot that you have to be more sophisticated and have more
perspective to understand Shakespeare. What television was able to do was to
capture people as they were.

So I think the lack of a real computer science today, and the lack of real
software engineering today, is partly due to this pop culture.”

~~~
dalbasal
Interesting take, pop X.

I will take issue with one thing though... Shakespeare's plays were for
something like a television audience, the mass market. The cheap seats cost
about as much as a pint or two of ale. A lot of the audience would have been
the illiterate, manual labouring type. They watched the same plays as the
classy aristocrats in their box seats. It was a wide audience.

Shakespeare's stories had scandal and swordfighting, to go along with the
deeper themes.

A lot of the best stuff is like that. I personally reckon GRRM a great
novelist, with a deep contribution to the art. Everyone loves Game of Thrones.
It's a politically driven story with thoughtful bits about gender, class,
and society. But it's not stingy on tits and incest, dragons and duels.

The one caveat was that Shakespeare's audience were all city slickers, and
that probably made them all worldlier than the average Englishman who lived in
a rural hovel, spoke dialect and rarely left his village.

What _is_ an elitist pursuit is not really Shakespeare, it's watching 450 year
old plays.

------
alexrbarlow
I'm not sure that what they've been left with is a monolith after all. I would
say they just have a new service, which is the size of what they should have
originally attempted before splitting.

In particular, as to their original problem, the shared library seems to be
the main source of pain, and that isn't technically solved by a monolith; nor
did they follow the basic rule of services: "put together first, split
later".

I feel prematurely splitting services like that is bound to have issues unless
they have 100 developers for 100 services.

The claim of "1 superstar" is misleading too, this service doesn't include the
logic for their API, Admin, Billing, User storage etc etc, it's still a
service, one of a few that make up Segment in totality.

~~~
hinkley

        unless they have 100 developers for 100 services.
    

That cure is worse than the disease. Every service works differently and 80%
of them are just wrong, and there’s nothing you can do because Tim owns that
bit.

~~~
topicseed
But if you can explain to the team or the CTO why Tim is doing it wrong and
how it is impacting X, Y and Z, then Tim will fix it or be sent elsewhere, no?

~~~
0xEFF
No, not unless you’re somehow able to make the team, the CTO and Tim feel good
at the same time. If you figure that part out let me know.

~~~
topicseed
But if Tim's work is so bad that you can prove it... how can a boss dodge
the proof? I guess Tim should be fired second; the boss should go first...

~~~
flukus
> How can a boss dodge the proof?

This is a business decision, reality has no influence here.

That's why they invented the term "Business reality".

------
thermodynthrway
I've always said if the Linux kernel can be a giant monolith, in C no less,
then there are maybe 100 web applications in the world that need to be split
into multiple services.

I've worked with microservices a lot. It's a never-ending nightmare. You push
data consistency concerns out of the database and between service boundaries.

Fanning out one big service in parallel with a matching scalable DB is by far
the most sane way to build things.

~~~
dualogy
"Need to" and "sane" are among my favourite subjective terms!

(Further below, I'll go into in which contexts I'd agree with your assessment
and why. But for now the other side of the coin.)

In the real world, current-day, why do many enterprises and IT departments and
SME shops go for µservice designs, even though they're not multimillion-user-
scale? Not for Google/Netflix/Facebook scale, not (primarily/openly) for
hipness, but for reasons like these, among others:

- that µs auto-forces a certain level of discipline in areas that would be
harder-to-enforce/easier-to-preempt-by-devs in other approaches --- modularity
is auto-enforced, as are separation of concerns and separation of interfaces
from implementations, or what some call (applicably-or-not) "unix philosophy"

- they can evolve the building blocks of systems less disruptively (keep
interfaces, change underlyings), swap out parts, do rewrites, plug new
features into the system, etc.

- allows for bring-your-own-language/tech-stack (thanks to containers + wire-
interop), which for one brings insights over time as to which techs win for
which areas, but also attracts & helps retain talent, and again allows
evolving the system with ongoing developments rather than letting the monolith
degrade into legacy because things out there change faster than it could be
rewritten

I'd prefer your approach for intimately small teams though. Should be much
more productive. If you sit 3-5 equally talented, same-tech/stack and
superbly-proficient-in-it devs in a garage/basement/lab for a few months,
they'll probably achieve much more & more productively if they forgo all the
modern µservices / dev-ops byzantine-rabbithole-labyrinths and churn out their
packages / modules together in an intimate tight fast-paced co-located self-
reinforcing collab flow. No contest!

Just doesn't exist often in the wild, where either remote distributed web-dev
teams or dispersed enterprise IT departments needing to "integrate", rule the
roost.

(Update/edit: I'm mostly describing current beliefs and hopes "out there", not
that they'll magically hold true even for the most inept of teams at-the-end-
of-the-day! We all know: people easily can, and many will, 'screw up somewhat'
or even fail in any architecture, any language, any methodology..)

~~~
codemac
Is there any evidence of this being true?

Do they actually force a discipline? Do people actually find swapping
languages easier with RPC/messaging than other ffi tooling? And do they really
attract talent?!

You make some amazing claims that I have seen no evidence for, and I would
love to see some.

~~~
munchbunny
In my experience, there's a lot of cargo culting around microservices. The
benefits are conferred by having a strong team that pays attention to
architecture and good engineering practices.

Regardless of whether you are a monolith or a large zoo of services, it works
when the team is rigorous about separation of concerns and carefully testing
both the happy path and the failure modes.

Where I've seen monoliths fail, it was developers not being
rigorous/conscientious/intentional enough at the module boundaries. With
microservices... same thing.

~~~
kazen44
Also, having a solid architectural guideline that is followed across the
company in several places (both in infrastructure and application landscapes)
makes up the major bulk of ensuring stability and usability.

The disadvantage is obviously that creating such a 'perfect architecture' is
hard to do because of different concerns by different parties within the
company/organisation.

~~~
munchbunny
> The disadvantage is obviously that creating such a 'perfect architecture'
> is hard to do because of different concerns by different parties within the
> company/organisation.

I think you get at two very good points. One is that realistically you will
never have enough time to actually get it really right. The other is that once
you take real-world tradeoffs into account, you'll have to make compromises
that make things messier.

But I'd respond that most organizations I see leave a lot of room for
improvement on the table before time/tradeoff limitations really become the
limiting factor. I've seen architects unable to resolve arguments, engineers
getting distracted by sexy technologies/methodologies (microservices), bad
requirements gathering, business team originated feature thrashing, technical
decisions with obvious anticipated problems...

------
ChicagoDave
I’ve been tracking the comments and my sense is that almost no one here
believes the business domain drives the technical solution.

Microservices, when constructed from a well-designed model, provide a level
of agility I've never seen in 33 years of software development. They also wall
off change control between domains.

My take from the Segment article is that they never modeled their business and
just put services together using their best judgment on the fly.

That’s the core reason for doing domain driven design. When you have a highly
complex system, you should be focused on properly modeling your business. Then
test this against UX, reporting, and throughput and build after you’ve
identified the proper model.

As for databases, there are complexities. Some microservices can be backed by
a key-value store at a significantly lower cost, but some high-throughput
services require a 12-cylinder relational database engine. The data store
should match the needs of the service.

One complexity of microservices I’ve seen is when real-time reporting is a
requirement. This is the one thing that would make me balk at how I construct
a service oriented architecture.

See Eric Evans' book and Vaughn Vernon's follow-up.

~~~
lolive
As a microservice agnostic, I wonder how you can deal elegantly with
transactions across services, concurrent access & locks, etc. [Disclaimer: I
have not read the article yet]

~~~
HolyHaddock
You may also want to look into the Saga pattern - I found
[https://www.youtube.com/watch?v=xDuwrtwYHu8](https://www.youtube.com/watch?v=xDuwrtwYHu8)
to be a handy high level overview for applying it to microservices.

Although personally, I've never felt the need to try and apply it
specifically, but the idea is interesting.

~~~
jj12345
Thanks for sharing this video.

When she's discussing compensations, she mentions that transaction T_i
can't have an input dependency on T_(i-1). What are some things I should be
thinking about when I have hard, ordered dependencies between microservice
tasks? For example, microservice 2 (M2) requires output from M1, so the final
ordering would be something like: M1 -> M2 -> M1.

Currently, I'm using a high-level, coordinating service to accomplish these
long-running async tasks, with each M just sending messages to the top-level
coordinator. I'd like to switch to a better pattern though, as I scale out
services.
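
For illustration, here's roughly what such a top-level coordinator can look
like as an orchestration-based saga; a minimal Go sketch, with invented step
and compensation types (nothing here is from the talk):

    package saga

    import (
        "context"
        "fmt"
    )

    // Step pairs an action with the compensation that undoes it, so a
    // failure partway through can roll back the steps already completed.
    type Step struct {
        Name       string
        Run        func(ctx context.Context, in any) (any, error)
        Compensate func(ctx context.Context) error
    }

    // RunSaga executes steps in order, feeding each step's output into the
    // next one (the hard, ordered dependency case). On failure it runs the
    // compensations of the completed steps in reverse order.
    func RunSaga(ctx context.Context, in any, steps []Step) error {
        var done []Step
        cur := in
        for _, s := range steps {
            out, err := s.Run(ctx, cur)
            if err != nil {
                for i := len(done) - 1; i >= 0; i-- {
                    if cerr := done[i].Compensate(ctx); cerr != nil {
                        return fmt.Errorf("step %s failed (%v); compensation %s also failed: %w",
                            s.Name, err, done[i].Name, cerr)
                    }
                }
                return fmt.Errorf("saga aborted at %s: %w", s.Name, err)
            }
            done = append(done, s)
            cur = out
        }
        return nil
    }

The ordering logic stays in one place (the orchestrator), which is essentially
the coordinating-service pattern you're already using; choreography via events
is the usual alternative, but it makes hard orderings like M1 -> M2 -> M1
harder to see.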

------
StavrosK
Let me write a meta technology hype roadmap, so we can place these sorts of
articles:

* Old technology is deemed by people too troublesome or restrictive.

* They come up with a new technology that has great long-term disadvantages, but is either easy to get started with short-term, or plays to people's ego about long-term prospects.

* Everyone adopts this new technology and raves about how great it is now that they have just adopted it.

* Some people warn that the technology is not supposed to be mainstream, but only for very specific use cases. They are labeled backwards dinosaurs, and they don't help their case by mentioning how they already tried that technology in the 60s and abandoned it.

* Five years pass, people realize that the new technology wasn't actually great, as it either led to huge problems down the line that nobody could have foreseen (except the people who were yelling about them), or it ended up not being necessary as the company failed to become one of the ten largest in the world.

* The people who used the technology start writing articles about how it's actually not that great in the long term, and the hype abates.

* Some proponents of the technology post about how "they used it wrong", which is everyone's entire damn point.

* Everyone slowly goes back to the old technology, forgetting the new technology.

* Now that everyone forgot why the new technology was bad, we're free to begin the cycle again.

~~~
tluyben2
I have been doing some tech advice jobs on the side to see what's going on in
the world and it's really scary what I've found. Only yesterday I was talking
with the CTO of a niche social networking company that has a handful of users
and probably won't get many more, who was telling me the tech they use: Node,
Go, Rust, Mongo, Kafka, some graph db I forgot, Redis, Python, React, Graphql,
Cassandra, Blockchain (for their voting mechanism...), some document database
I had never heard of and a lot more. A massive, brittle, SLOW (!) bag of
microservices and technologies tied together, where in 'every micro part' they
used best practices as dictated by the big winners (Facebook, Google,
whatever) in Medium blogs. It was a freak show for a company of 10 engineers.
And this is not the first time I've encountered it; 3 weeks ago, on the other
side of the world, I found a company with about the same 'stack'.

People really drink the koolaid that is written on these sites and it is
extremely detrimental to their companies. PostgreSQL with a nice boring
Java/.NET layer would blow this stuff out of the water performance wise (for
their _actual real life usecase_ ), would be far easier to manage, deploy,
find people for, etc. I mean, using these stacks is good for my wallet as an
advisor, but I have no clue why people do it when they are not even close to
1/100000th of Facebook.

~~~
myth_drannon
Resume driven development. I've worked for a similar small company with barely
any users/data, but the tech choices were driven by how useful the tech was for
the developers' future job prospects and not the current or future needs of the
organization.

~~~
s73v3r_
Maybe if companies gave sufficient raises and promotions, and actually tried
to retain talent, then we wouldn't have this culture where people keep having
to switch jobs, and therefore always be looking out for what will get them the
next gig.

~~~
fbonetti
I think engineers would still jump ship just as often even if they were paid
more. When you really get down to it, most programming is pretty tedious. What
makes it fun, for some engineers, is the opportunity to learn new things, even
if it means doing so to the detriment of the business.

------
kraftman
So they split everything apart because their tests were failing and they
didn't want to spend time fixing them, and then they merged it all back
together by spending time fixing and improving their tests?

It seems like the problem here was bad testing and micro repos, not
microservices.

~~~
dgritsko
I agree, and their conclusion even features this salient bit:

>However, we weren’t set up to scale. We lacked the proper tooling for testing
and deploying the microservices when bulk updates were needed. As a result,
our developer productivity quickly declined.

My impression after reading this post was that their microservice problems
were symptoms of an organization that wasn't set up to implement them
effectively, rather than microservices being the actual cause of those
problems.

~~~
matwood
Yeah, multiple services are going to be challenging if an organization has not
already set up fully automated CI/CD pipeline(s).

------
jarfil
The whole article reads like BS to me.

So the initial problem was a single queue? Well, then split the queue, no need
to go all crazy splitting all the code.

Switching to 100+ microservices? There is no need to switch to 100+ repos too,
runtime services don't need to have one repo per service, just use a modular
approach, or even feature flags.

100+ microservices, some of them with much lower load than others? Then
consolidate the lower load ones, no need to consolidate "all" of the
microservices at once.

Library inconsistencies between services? No, just no, always use the same
library version for all services. Automate importing/updating the libraries if
you need to.

A single change breaks tests in a way you need to fix unrelated code? WTF,
don't you have unit tests to ensure service boundary consistency and API
contracts?

Little motivation to clean up failing tests? Yeah... you're doing it wrong.

Only then did you figure out to record traffic for the tests? HUGE FACEPALM,
that's the FIRST thing you should do when dealing with remote services!
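
For anyone unfamiliar, the record/replay idea is to capture a real response
from the remote destination once, then serve it from disk in tests. A minimal
Go sketch, with a made-up fixture path and endpoint:

    package destinations_test

    import (
        "io"
        "net/http"
        "net/http/httptest"
        "os"
        "testing"
    )

    // replayServer serves a previously recorded response body instead of
    // hitting the real remote destination, so the test is deterministic.
    func replayServer(t *testing.T, fixture string) *httptest.Server {
        body, err := os.ReadFile(fixture) // recorded earlier from live traffic
        if err != nil {
            t.Fatalf("missing recorded traffic: %v", err)
        }
        return httptest.NewServer(http.HandlerFunc(
            func(w http.ResponseWriter, r *http.Request) {
                w.Write(body)
            }))
    }

    func TestDestinationAgainstRecordedTraffic(t *testing.T) {
        // Hypothetical fixture name; in practice one per destination.
        srv := replayServer(t, "testdata/destination_response.json")
        defer srv.Close()

        resp, err := http.Get(srv.URL + "/v1/track") // stand-in for the destination call
        if err != nil {
            t.Fatal(err)
        }
        defer resp.Body.Close()
        got, _ := io.ReadAll(resp.Body)
        if len(got) == 0 {
            t.Fatal("expected recorded payload, got empty response")
        }
    }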

~~~
thomasfromcdnjs
I must agree. Seems like every step could have been solved without jumping to
the conclusion that it was because we didn't use a monolith.

I've never understood the false dichotomy of microservices vs monolith... just
split things when it makes sense. ¯\\_(ツ)_/¯

~~~
Cthulhu_
"when it makes sense" isn't a science though; for them it made sense at the
time, like how now it makes sense to move back to a monolith.

~~~
sieabahlpark
Except it wasn't a good idea when they did it either. It's trying to find a
way to use a specific solution rather than finding the correct solution.

------
jacquesm
100's of problem children sounds like a step too far.

A few services > monolith

monolith > 100's of services.

The big trick with any technology is to apply it properly rather than
dogmatically, and if you are breaking up your monolith into 100s(!) of
microservices you are clearly not in control of your domain. That's a
spaghetti of processes and connections between them rather than a spaghetti of
code. Just as bad, just in a different way.

~~~
matwood
Great point. I like separate services, but would cringe at 100s of services in
any system I have seen.

~~~
spyspy
I can't believe this made it out of any planning meetings.

Engineer 1: "We'll have one repo per downstream service."

Engineer 2: "But we have hundreds of those, so now we have to manage hundreds
of github repos???"

Anyone sane: "That doesn't sound right, we should rethink this..."

~~~
jacquesm
I've seen worse. And I wished that I was kidding.

~~~
lagadu
Storytime?

~~~
jacquesm
Every microservice on its own little cluster of VMs for HA and performance...

A couple of hundred VMs is nothing in a scenario like that. Good luck trying
to debug anything.

~~~
organsnyder
That's basically how microservices are operated on orchestrators like
Kubernetes—just substitute "container" for "VM", which is a mostly-academic
difference from the perspective of your application. Operations
tooling—distributed tracing, monitoring, logging...—is essential.

------
spyspy
> While our systems would automatically scale in response to increased load,
> the sudden increase in queue depth would outpace our ability to scale up,
> resulting in delays for the newest events.

This strikes me as the core of their problem, and every step taken was a way
to bandaid this limitation. Would the cost of moving to faster-scaling
infrastructure have been as high as rearchitecting the entire system?

> When we wanted to deploy a change, we had to spend time fixing the broken
> test even if the changes had nothing to do with the initial change.

This seems like a separate and even larger problem. Changes are breaking tests
for unrelated code areas? Is the code too tightly coupled? Sounds like it. The
unit tests are doing exactly what they're designed to do. Hard to feel
sympathy for the person who's breaking them and then trying to figure out a
way to sidestep them rather than fix the underlying issues.

~~~
dasil003
The job of these services is to transform their internal event format to 140
different output formats. You can imagine that there is a lot of duplication
in the functionality that these services need to do. Are you suggesting that
they avoid any shared libraries and just rewrite the same code over and over
hundreds of times and update them independently?

~~~
spyspy
If making a change to a shared library breaks half the services, should it
have been shared in the first place? It still smells odd to me that there's so
much interdependence amongst the individual services that they break at will.

~~~
dasil003
Maybe, but it’s conceivable to me that with 140 endpoints growing organically
it would be very hard for an individual engineer to know for sure what should
be abstracted or not. I think it’s a fundamentally hard problem even though on
the surface it seems like it should be simple. Adding arbitrary new
unaffiliated services is exactly the kind of thing that leads to irreducible
complexity and a moving target that is very difficult to design for.

------
dbt00
Reading this post-mortem was very useful, and I appreciate the Segment
engineering team sharing it.

It seems like the primary causes of their problems were flaky/unreliable tests
and the difficulty of making coordinated changes across many small repositories.

Having worked on similar projects before (and currently), with a small team
driving microservices oriented projects, I would probably recommend:

1) single repository to allow easy coordinated changes.

2) a build system that only runs tests that are downstream of the change you
made (Bazel is my favorite here, but others exist). This means all services
use the HEAD version of libraries, and you find out if a library change broke
one of the services you didn't think about. This also makes test runs faster.

3) Emphasis on making tests reliable. Mock out external services, or if you
must reach out to dependencies use conditional test execution, like golang's
Skip or junit's Assume if you can't verify a working connection.

If you still can't build a reliable service with those choices, then it's time
to think about changing the architecture.
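
To illustrate point 3, a minimal Go sketch of conditional test execution; the
health-check endpoint is a made-up example:

    package service_test

    import (
        "net/http"
        "testing"
        "time"
    )

    // TestAgainstLiveDependency exercises a real downstream dependency, but
    // skips rather than fails when that dependency is unreachable, so an
    // outage elsewhere can't leave the suite permanently red.
    func TestAgainstLiveDependency(t *testing.T) {
        client := &http.Client{Timeout: 2 * time.Second}
        resp, err := client.Get("https://dependency.example.com/health") // hypothetical endpoint
        if err != nil {
            t.Skipf("dependency unreachable, skipping: %v", err)
        }
        resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            t.Skipf("dependency unhealthy (%d), skipping", resp.StatusCode)
        }
        // ... real assertions against the dependency would go here ...
    }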

------
HeavyStorm
I am a strong believer that microservices are overhyped. I usually resist when
senior management asks us to use them (which itself proves the point about
hype).

But by reading the first paragraphs of the article you see that the guys from
Segment made a series of grave mistakes in their "microservices architecture",
the most important one being the use of a shared library across many services.
The goal of microservices is to achieve isolation, and sharing components with
specific business rules between them not only defeats the purpose, but results
in increased headaches.

Without deep knowledge of the solution, it's hard to judge, but it seems this
was never a real case for microservices. They needed infrastructure isolation
when the first delay issues surfaced, but there wasn't anything driving
splitting the code up.

Sam Newman discusses in his book how to find the proper seams to split
services (DDD aggregates being the most common answer), and it seems people
are making rather arbitrary decisions.

~~~
inetknght
> _I am a strong believer that microservices are overhyped._

In general, I fully concur: there are very few services that actually warrant
a microservice architecture.

------
the_arun
Microservices solve for smaller logical teams, speed, deployment isolation & a
ton of other problems. Every solution comes with a trade-off. There is no
perfect solution. It is up to us to decide whether we need a monolith or
microservices for our need & use case instead of comparing them in the abstract.

~~~
cortesoft
Not only that, but there is a lot of room between a 'monolith' and 'micro
services'. How about medium services? You break some things up and leave other
things combined.

~~~
sbov
Some people call this... microservices! Common advice is that a microservice
should align with a bounded context in domain driven design, which can involve
a LOT of code.

Many large companies have millions of LOC behind their microservices. Your
average startup probably doesn't.

------
badasstronaut
Personally, I don’t like the terms ‘microservices’ or ‘nanoservices.’ What’s
the value add in describing the relative size of the service? The _domain_
should drive what becomes its own service. Every service should handle the
business logic within a particular domain. It’s definitely a goldilocks
problem, though, in that there’s a too-small and too-large, and we’re looking
for the just-right fit!

~~~
jacquesm
That's exactly it. But when you start doing 'microservices' you get the
architecture astronauts who go and see into how many silly little services a
monolith can be broken up. The end results are as predictable as the original
monolith, both end up as an unmaintainable mess in a couple of years.

I predict the same will happen to the 'superstar', it just isn't old enough
yet (and at least it was built with some badly needed domain knowledge).

------
framebit
"2020 prediction: Monolithic applications will be back in style after people
discover the drawbacks of distributed monolithic applications." -Kelsey
Hightower on Twitter

[https://twitter.com/kelseyhightower/status/94025989833123840...](https://twitter.com/kelseyhightower/status/940259898331238402?lang=en)

~~~
jspash
I'm waiting for the day that this becomes a standard refrain: "Javascript.
What were we thinking?!"

~~~
chrisco255
Been using JS for years and still loving it. The only thing I would move
towards from here is some kind of ML like Elm, Reason, Haskell, etc. Certainly
wouldn't go back to Java.

------
dpark
Am I understanding correctly that they had 3 engineers and >140 microservices?
Microservices definitely have their own costs and tradeoffs, but 140 services
and 3 engineers sounds like just a terrible engineering choice.

~~~
hobls
Agreed. "Micro" is a terribly defined term, and it sounds like this team went
absolutely nuts in one direction. (And then in response to their problems,
went as far back as possible to a single monolith.) This suggests a bit of a
lack of nuance in their decision making process to me.

I'm not interested in figuring out exactly what the right marketing term for
it is, but I've had good experiences with teams of 6-10 engineers owning
something like 2-5 services with a larger ecosystem of dozens to hundreds of
services. Of course, I've been working at very large companies with extremely
high traffic for several years now, so my experience is skewed in that
direction.

If I had three engineers on my team I'd be unlikely to end up with more than a
small handful of services. Half the benefit of splitting up your services has
to do with keeping your independent teams actually independent -- if it's just
one team then that isn't a problem in the first place.

~~~
matwood
> If I had three engineers on my team I'd be unlikely to end up with more than
> a small handful of services.

We have a small team and have a handful of services. We also have a fully
automated CI/CD pipeline. It's worked really well. I doubt any would be
considered 'micro', but instead they are designed around functional areas like
authentication or backend processing.

~~~
hobls
> I doubt any would be considered 'micro', but instead they are designed
> around functional areas like authentication or backend processing.

Yeah, who knows what micro means, but that's exactly how I like to split up
services. If at some point a service gets too large, split it up. Hundreds of
services out the gate is a gross premature optimization. (And like most
premature optimizations, ends up costing much more time both in development
and in maintenance.)

------
khalilravanna
This might come off as being snide, but I'm genuinely curious: was their
solution really just having all the services in one repository? That doesn't
seem like a problem with microservices at all but more of a devops problem. To
be clear, I'm not arguing for microservices, I'm just trying to understand if
this was really a problem with splitting off multiple repos. Maybe I'm just
really dense and someone can set me straight.

I've actually had experiences with seemingly this same problem at a previous
startup. Once we started spinning off individual repositories for small pieces
of business logic, things started to go downhill as the logistics of
communicating and sharing one another's code became more and more complex.
~~~
avarun
It seems like splitting off small repos for everything is a solution looking
for a problem. Some of the most successful software companies out there have
monolithic repos, but not monolithic services.

The real solution is microservices in monorepos.

------
tutfbhuf
If you have 140 somewhat similar entities that all share common code, then
they don't fulfill the very important microservice criterion of being
independent. In your case, I would recommend using a plugin-based system. Do
it the other way around, have 1 application that contains the common code
(previously shared library code) and create 140 plugins. This way you can
update the single application, load all plugins, execute the tests, check if
everything is fine and deploy the application. Every plugin can live in its
own repository and can be versioned separately, but a new version can only be
deployed if it works with the latest version of the application.
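
A rough Go sketch of that host-plus-plugins shape; the interface and registry
here are illustrative, not from the article:

    package destinations

    import "fmt"

    // Destination is the contract every plugin implements; the host
    // application owns the shared code and the delivery loop.
    type Destination interface {
        Name() string
        Send(event map[string]any) error
    }

    var registry = map[string]Destination{}

    // Register is called by each plugin (e.g. from its init function),
    // so the host discovers all destinations at startup.
    func Register(d Destination) {
        registry[d.Name()] = d
    }

    // Dispatch routes an event to one named destination.
    func Dispatch(name string, event map[string]any) error {
        d, ok := registry[name]
        if !ok {
            return fmt.Errorf("unknown destination %q", name)
        }
        return d.Send(event)
    }

Each of the 140 plugins would live in its own package and call Register from
an init function; the host application is then built, tested, and deployed as
one unit, matching the "load all plugins, execute the tests, deploy" flow.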

~~~
kyberias
This is the voice of an adult developer talking.

------
r00tanon
This coincides with my own experiences in the financial sector. Distributed
computing is undoubtedly the way to scale, but the trick is making the
distributed nature of the system completely invisible (or as much as possible)
to the developers, the applications and the supporting staff.

I have seen this phenomenon - thinking that a large system, broken down into
tiny parts, is somehow easier to manage - time and time again over 30 years of
development. In every case, the one thing the central thinkers fail to realize
is that complexity obeys a kind of conservation law, like energy - it can be
transformed, but it cannot be destroyed.

Also, when it comes to large teams I have seen one thing work when it comes to
sharing a resource(s) critical to a larger system - shared pain. If the
central/reusable code/service breaks everyone's stuff, then everyone forms a
team to immediately address the problem before continuing on. The solution is
almost never "find a way to let the other teams continue while something
important is on fire." It seems like a major motivation for a microservices
architecture seeks to avoid the pain - which perhaps is not the best reason to
use microservices.

I like the idea of unseen, but indispensable, complexity. For instance, the
human brain is probably the most complex thing in the world, but the interface
is fairly simple :)

------
andy_ppp
Oh man, this is just the tip of the iceberg with Microservices. There is
nothing Micro about them, they are so difficult to deal with that it becomes
impossible to actually iterate or build user value and introduces loads of
difficult to debug problems.

We have architects here who dictate the design of the system, but IMO they
have not done the simplest implementation of anything. We have Kafka to
provide ways of making each service eventually consistent, so when we delete
something out of our domain service, it requires absolutely huge amounts of
code in the various other services listening for events to delete things in
each place. Every feature is split across N different services, which means N
times more work + N times more difficulty debugging + N times more difficulty
deploying.

The system has been designed with buzzwords in mind - Go and GRPC have been a
disaster in terms of how quickly people have developed software (as has
Concourse - so many man-hours wasted trying to run our own CI infrastructure,
it's unreal), plus loads of small services that are individually difficult to
deploy and configure (and come with scary defaults like shared secret keys for
auth - using a dev JWT on prod, for example). Then there's the difficulty of
debugging the system - there simply aren't the tools to understand what is
going wrong or why - you have to build dashboards yourself and make your
application resilient to services not existing.

Never ever try to build Microservices before you know what your customers
_really_ want - we've spent the last 6 months building a really buggy CRUD app
that doesn't even have C and D fully yet. Love your Monolith.

~~~
organsnyder
Ugh. Sounds like a severe case of resume-driven architecture.

In my experience, you won't know whether you need microservices until you're
on at least v2.0 of your application. By then, you have a better understanding
of what your real problems are.

~~~
andy_ppp
Resume-driven architecture will benefit me, assuming I ever want to waste this
much time writing a delete-documents method again (4 people on my team worked
on it on and off for a month).

------
ris
I'm not sure how worthwhile it is writing any more "microservices are dumb"
articles - all the people who have spent the last 5 years leaving microservice
messes in their wake appear to have moved on to creating "serverless" messes
of lambda functions which people like you and me are going to be going around
tidying up in about 5 years from now.

------
e67f70028a46fba
As with J2EE EJBs, microservices conflate two things:

- a strong API between components

- network calls

The former is a very good idea that should be implemented widely in most code
bases, especially as they mature, using techniques like modules and
interfaces.

The latter is incredibly powerful in some cases but comes at a huge cost in
system complexity, performance and comprehensibility. It should be used
sparingly.
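
A quick Go sketch of the distinction (all names invented): the strong API is
the interface, and whether a given call crosses the network is an
implementation detail you can pay for case by case.

    package billing

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
    )

    // Charger is the strong API between components: callers depend only on
    // this interface, never on how the call is dispatched.
    type Charger interface {
        Charge(customerID string, cents int64) error
    }

    // LocalCharger is an in-process implementation: a plain method call,
    // no network cost.
    type LocalCharger struct{}

    func (LocalCharger) Charge(customerID string, cents int64) error {
        // ... write directly to the local datastore ...
        return nil
    }

    // HTTPCharger satisfies the same interface over the network, so the
    // complexity cost is paid only where remoteness is actually needed.
    type HTTPCharger struct{ BaseURL string }

    func (c HTTPCharger) Charge(customerID string, cents int64) error {
        body, err := json.Marshal(map[string]any{"customer": customerID, "cents": cents})
        if err != nil {
            return err
        }
        resp, err := http.Post(c.BaseURL+"/charge", "application/json", bytes.NewReader(body))
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("charge failed: %s", resp.Status)
        }
        return nil
    }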

------
topicseed
I think that with the rise in popularity of functions as a service (lambda,
gcf, azure), we are heading more and more towards nanoservices.

Small services are easier to develop across several teams, in my opinion. Each
team knows what it takes as input and what it produces as output. They can do
whatever in between, as long as these two contracts are respected.

But overseeing all these moving pieces changing at different paces is tough.
And the smaller the services get, the harder it will become.

Not one size fits all, clearly!

~~~
madamelic
>I think that with the rise in popularity of functions as a service (lambda,
gcf, azure), we are heading more and more towards nanoservices.

There are some pretty big asterisks next to running "nanoservices". Mostly how
expensive they actually are to run at large scale and the weird caveats that
can happen due to them not always being up.

And I wouldn't advocate for "always-up nanoservices".

The basic answer to both "nanoservices" and microservices is do what you think
is right but don't go too far. There are good reasons to make a nanoservice
and good reasons not to, same with microservices.

------
taeric
Not that I disagree that microservices easily go awry, but the problem here
seems traceable to the shared library code. Each microservice should be as
standalone as possible. Your contract with that service is the service
contract, not some shared library.

As soon as you have shared library, you now have coordinated deployments. And
that is just not fun and will cause problems.

The trick here is that this does mean you will duplicate things in different
spots. But that duplication is there for a reason: the work is literally done
in two places. When you update the service, you have to do it in a backwards-
compatible way, and then you can follow with updates to the callers. This
makes it obvious you will have a split fleet at some point, but it also means
you can easily control it.
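
As a hypothetical example of that backwards-compatible step, a tolerant-reader
handler in Go that honors both an old and a new field name during the
split-fleet window (the field names are invented):

    package api

    import (
        "encoding/json"
        "net/http"
    )

    // trackRequest accepts both the old and the new field name during the
    // transition window, so old callers keep working while they migrate.
    type trackRequest struct {
        UserID string `json:"user_id,omitempty"` // new field
        UID    string `json:"uid,omitempty"`     // legacy field, still honored
    }

    func handleTrack(w http.ResponseWriter, r *http.Request) {
        var req trackRequest
        if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        id := req.UserID
        if id == "" {
            id = req.UID // fall back for callers not yet updated
        }
        if id == "" {
            http.Error(w, "user_id required", http.StatusBadRequest)
            return
        }
        // ... process the event for id; remove the fallback once the
        // split fleet has fully converged on the new field ...
        w.WriteHeader(http.StatusAccepted)
    }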

~~~
ec109685
There is commonality across those 140 services that they extracted into a
shared library so they weren’t copying that commonality 140 times.

~~~
cestith
The fix, then, is to put that commonality into a new microservice the other
microservices call.

The more I read about the problems people have with microservices, the more
I'm convinced they've never read about flow-based programming.

~~~
ec109685
Doesn’t that get excessive, making network calls to do what could be more
naturally expressed as a method call?

If you have two different teams collaborating or you expand beyond what a
single box can do, create services. But if you can express things reasonably
as a single service, why make things more complicated and error prone?

~~~
cestith
They were already breaking things into separate services. They violated DRY by
putting the same or similar code into all of them. Then instead of cleaning up
the DRY violations by refactoring into a shared service or refactoring into a
shared library as a static asset, they factored it out into a mushy shared
library still in lots of flux.

Whether they wanted to factor into another service with a defined and mostly
static API, or into a common library with a defined and mostly static API,
the failure was factoring common code into an amorphous blob that gates the
release of all the other services. Instead, they've de-modularized the code
and called that a success.

If you're drawing hard lines between services and having them talk to one
another, having one of these models for at least parts of that makes a lot of
sense:

* a filter system of the flow-based nature

* a REST API that does a transformation and returns transformed data

* a message broker with a producer/consumer model, where the consumer of one queue does a transform and puts the data into another queue

* a full actor model

* a full flow-based model

------
SE4L
Seems to me the problem is the shared libraries. Yes, without sharing you
have to repeat a fair amount of code, but in most cases the representation
that each service cares about is not the same, which reduces the value of
these shared libraries. It seems they would have solved a lot of the really
critical issues by simply not sharing as much code.

~~~
hinkley
I maintained the shared library for five teams trying to move data around.

The biggest challenge is making the shared library forward and backward
compatible with itself for at least a few releases in either direction,
because not everyone will redeploy at the exact same moment.

If you can't solve that problem everything gets painful. Doing that right was
the second hardest part of that job (meetings were the hardest). The nominal
job itself (securing the data interchange) came in third place.

------
lostcolony
I'm hardly a microservice apologist, but sharing code or data across services
is a major smell. It means these things are related, and should likely be
bundled together. Don't just break a service apart because it's de rigueur.

------
scarface74
I don’t see any reason you shouldn’t always design a system as domain specific
micro services.

Now those micro services shouldn’t always be out of process modules that
communicate over HTTP/queues, etc. A microservice can just as easily be
separately compiled modules within a monolithic solution with different
namespaces, public versus private classes, and communicate with each other in
process.

Then if you see that you need to share a “service” across teams/projects, or a
module needs to be separately, deployed, scaled, it’s quite easy to separate
out the service into versioned packages or a separate out of process service.

------
grosjona
It's refreshing to read an article which challenges common wisdom.

I've endured a lot of suffering at the hands of the microservices fan club.
It's good to see reason finally prevail over rhetoric.

It would have been nice if people had written articles like this 2 years ago
but unfortunately, people with such good reasoning abilities would probably
not have been able to find work back then.

Software development rhetoric is like religion. If you're not on board you
will be burned at the stake.

So many times during technical discussions, I had to keep my mouth shut in the
name of self-preservation.

~~~
orcasauce
> So many times during technical discussions, I had to keep my mouth shut in
> the name of self-preservation.

This sounds like an issue with being able to articulate why something is or
isn't going to net the expected benefits, or with being able to foresee
unexpected risks. Keeping silent is better than throwing out silly hyperbolic
risks, but not bringing up real risks because "they don't want to hear it" is
completely bogus. Any solid engineer will bite at another potential risk to
ensure they don't find themselves engineered into a corner 65% through a
project. Your comment also makes it out like the notion the article advances,
monolith over microservices, is gospel for every situation; that in no
condition would it ever make sense to use microservices and that only naive
zealots would espouse the wisdom (dogma) to use them. You can use any piece of
technology poorly; that doesn't mean the core concept is flawed, just that
your problem space is different from what that software is trying to solve.
Use HDFS as a primary data store in place of MySQL where it doesn't make
sense, and you might find yourself wishing someone had told you HDFS is
terrible and to just use the tried and true MySQL of olden days.

~~~
grosjona
>> This sounds like an issue with being able to articulate why something is...

The issue is not articulation of ideas; the issue is that when all the books,
all the articles and all the people believe that something is true, there is
no amount of articulation that will be able to convince them otherwise.

You have to wait for the hype to go away before even considering bringing up
the argument.

------
staticassertion
Didn't take long for "we used shared libraries, and then found out we couldn't
deploy independently". Sounds like you weren't quite doing microservices?

~~~
caoilte
quite.

------
justinzollars
I knew this day would come. We have come full circle. Also I really like this
comment:

> You push data consistency concerns out of the database and between service
> boundaries.

------
acroback
This is a classic case of not understanding microservices and trying to fit a
problem around a tool.

At work, we have close to ~50 services (no one calls them microservices), but
they do not suffer from this brittleness. We segregate our services based on
languages. So, all C services go under coco/, all Java services go under
jumanji/, all Go services go under goat/, all JS services go under js/. This
means every time you touch something under a repo, it affects everyone. You
are forced to use existing code or improve it, or you risk breaking code for
everyone else. What does this solve? This solves the fundamental problem a lot
of leetcode/hackerrank monkeys miss: programming is a social activity; it is
not a go-into-a-cave-and-come-out-with-a-perfect-solution-in-a-month activity.
More interaction among developers means engineers are forced to account for
trade-offs. Software engineering in its entirety is all about trade-offs,
unlike theoretical comp science.

Anyway, this helps because as engineers we must respect and account for other
engineers' decisions. This method helps tremendously with that. No one
complains; everyone who wants 1000 more microservices usually turns out to be
a code monkey entangled in the new fad, or someone who doesn't want to work
with other engineers.

You want to use Rust? There is a repo named fe2O3/, go on. Accountability and
responsibility are on your shoulders now.

If you think about it, an engineer is tied to his tools, so why not segregate
repos at the language level instead of some arbitrary boundary no one knows
about in a dynamic ecosystem?

~~~
ljm
I'd wager that microservices, a lot of the time, are basically used as a
management structure rather than for their benefits as pure tech, so less
mature teams can silo themselves off and avoid communication (e.g. "I can work
just on my backend image processing bit without dealing with the React guys
now", "now the CTO won't be on my back so much," or whatever).

The irony being that anything approaching SOA (or microservices) requires
exactly the same amount of communication. More likely they require more since
it's almost certain that such a decision introduced chaos.

~~~
johndubchak
I think what was missed, in the article, is that the fundamental problem was
centered around a shared architecture of destinations and shared code.

You cannot possibly have every destination be a separate repo and then have
the development lifecycle of your shared code be so active that it ultimately
puts at risk the architecture of your entire organization.

What makes shared code so perfect is having stability such that you extract
your variant code into your non-shared code. Shared code should evolve at a
much slower pace than your non-shared code or you risk this very outcome.

Microservices are not dead, nor are they the solution to everything. We need
better architects.

~~~
mikekchar
> ... the fundamental problem was centered around a shared architecture of
> destinations and shared code.

From an architectural perspective, there is absolutely no difference between a
micro service and a library. The only real difference is in the dispatch
mechanism.

The problems around configuration management are the same problems we've had
as programmers for decades. It's just that the people who are keen on micro
services are usually not old enough to have experienced the pain in a
different context.

Should shared libraries be in different repos, or should you put everything in
a single repo? How do you deal with versioning? What happens if one app wants
version 1 of the library and another app wants version 2? How do you deal with
backwards compatibility of the API? Do you make a whole new library when you
decide the old API is incompatible with the new vision? Blah, blah, blah,
blah. None of this is a new problem.

I can't remember which version of Windows it was (maybe 7?) where they were
seriously delayed mainly because they had so many programs using different
versions of libraries. Integrating it at the end was apparently complete hell.
Since they wanted to have separation of responsibility in their groups, each
group was just pounding away implementing the features that they needed, but
not integrating as they went -- because that would mean lots of cross team
communication. The exact same thing is likely in a large organisation with a
ton of micro services.

There isn't just one way to solve the problem. Mono repos and monoliths help
in certain ways and cause problems in other ways. There are other techniques
as well (should we implement ld.so for micro services? :-) ) But as you
mention, the real answer is that the solution requires humans, not technology.

~~~
ronjouch
> _" should we implement ld.so for micro services? :-)"_

Working exactly along those lines this week on multiple services exposed over
REST APIs, I was wondering if tools exist to check compatibility between them.
Said differently,

- I have a `swagger.yaml` for my service A managing chipmunks, and it says
endpoint `/chipmunks` supports a 'color' query parameter.

- In service B, I have a `handleToServiceA` that encapsulates calling A. Then
say I write `const chipmunks = handleToServiceA.getChipmunks({'colour':
'blue'})`.

Are there (whatever the ecosystem) tools that would read serviceA's
swagger.yaml, detect my error in serviceB (color -> colour) and report the
issue at compile time rather than run time?
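
Absent an off-the-shelf tool, one option is a small hand-rolled CI check. A
minimal Go sketch, assuming a hypothetical swagger.yaml layout and a
hand-maintained list of the parameters service B sends:

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    // Just enough of the swagger/OpenAPI shape to reach parameter names.
    type spec struct {
        Paths map[string]map[string]struct {
            Parameters []struct {
                Name string `yaml:"name"`
            } `yaml:"parameters"`
        } `yaml:"paths"`
    }

    func main() {
        raw, err := os.ReadFile("serviceA/swagger.yaml")
        if err != nil {
            panic(err)
        }
        var s spec
        if err := yaml.Unmarshal(raw, &s); err != nil {
            panic(err)
        }
        allowed := map[string]bool{}
        for _, p := range s.Paths["/chipmunks"]["get"].Parameters {
            allowed[p.Name] = true
        }
        // Parameters service B sends; in a real check these would be
        // extracted from B's client code rather than listed by hand.
        for _, sent := range []string{"colour"} {
            if !allowed[sent] {
                fmt.Printf("serviceB sends %q but serviceA's /chipmunks does not accept it\n", sent)
                os.Exit(1)
            }
        }
    }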

~~~
aeonsky
I think this is one of the standard options:
[https://docs.pact.io](https://docs.pact.io)

------
ascendantlogic
What's old is new again. The inevitable rewinding of the 2012 architecture
"revolution" starts now. I look forward to people in 2026 saying "monoliths
are bad, split everything out to services"

~~~
cirgue
"You may not be interested in the dialectic, but the dialectic is interested
in you."

------
acd
I have been thinking about how efficient microservices are compared to a
monolith.

In a micro service architecture, the request bounces through different service
layers, json serialization, network transfers.

In a normal monolithic application, especially if it's on one machine, the
request touches main memory and then the CPU cache.

From "Latency Numbers Every Programmer Should Know", I especially think of
the "send 1K bytes over 1 Gbps network" entry vs cache latency when comparing
microservices and monoliths.
[https://gist.github.com/jboner/2841832](https://gist.github.com/jboner/2841832)

I.e., instead of a microservice's 10,000 ns for a network transfer, you could
fetch within 10 ns from CPU cache in a monolith; that is 1,000 times more
efficient.
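
A toy Go benchmark makes the gap visible; this is only a sketch (loopback
HTTP, so no real network at all), and absolute numbers vary by machine:

    package latency_test

    import (
        "bytes"
        "io"
        "net/http"
        "net/http/httptest"
        "testing"
    )

    func transform(b []byte) int { return len(b) } // stand-in for real work

    var sink int // prevents the compiler from optimizing the call away

    // BenchmarkInProcess measures the work as a plain function call.
    func BenchmarkInProcess(b *testing.B) {
        payload := make([]byte, 1024) // the "1K bytes" from the latency table
        for i := 0; i < b.N; i++ {
            sink += transform(payload)
        }
    }

    // BenchmarkOverHTTP measures the same work behind a local HTTP hop;
    // a real network adds further latency on top of this.
    func BenchmarkOverHTTP(b *testing.B) {
        payload := make([]byte, 1024)
        srv := httptest.NewServer(http.HandlerFunc(
            func(w http.ResponseWriter, r *http.Request) {
                body, _ := io.ReadAll(r.Body)
                w.Write(body) // echo back, standing in for real work
            }))
        defer srv.Close()
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            resp, err := http.Post(srv.URL, "application/octet-stream", bytes.NewReader(payload))
            if err != nil {
                b.Fatal(err)
            }
            got, _ := io.ReadAll(resp.Body)
            sink += transform(got)
            resp.Body.Close()
        }
    }

Even over loopback, the HTTP version is typically orders of magnitude slower
than the plain call; a real network hop only widens the gap.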

Are we beyond the peak of inflated expectations on Micro services and towards
the Plateau of productivity?
[https://en.wikipedia.org/wiki/Hype_cycle](https://en.wikipedia.org/wiki/Hype_cycle)

------
ChicagoDave
I liked the rundown and agree there are tactical benefits to building
monolithic APIs, but one of the core tenets of microservice architecture
coming out of the domain driven design space is that monolithic systems hide
business logic. I'm pretty sure by moving everything back to a monolith,
you've abandoned DDD entirely. From a strategic perspective, that's bad for
your business.

If you get your DevOps ducks in a row, a lot of the issues you have with
standing up and maintaining individual microservices should be manageable. I
certainly understand the pain of dependency management, but handling it is
also part of good architecture.

I'm willing to hear more of these stories though. We can always learn about
edge cases or even new paradigms that come out of current thinking.

------
akerro
Microservices are damn hard to implement correctly. Think how much math and
how many algorithms there must be to correctly test and version your model and
API, load balance, gather logs, collect logs, archive logs, grep in logs, set
up VMs/Dockers, DNS+DHCP and firewalls, load balance databases (master/slaves?)
and your services, horizontal scaling maybe?, manage privileges and access,
manage disk space, store and collect user-uploaded/generated files, isolate
environments...

If you don't understand these concepts and how much work must be done to
correctly implement a microservice architecture - you SHOULD STAY with
monoliths, they are much easier.

[http://principlesofchaos.org/](http://principlesofchaos.org/)

~~~
sobani
All of these things you mentioned are either also required for a monolith
(like DNS+DHCP) or are required for neither (isolated environments).

Does that mean that because of all the work required to properly set those up
for a monolith, we better stay with microservices?

------
catchmeifyoucan
I wonder what language they're using?

> if a bug is introduced in one destination that causes the service to crash,
> the service will crash for all destinations.

This sounds dangerous. If I were a destination provider and returned some
garbage, could that take the system down?

Also, reading up on Centrifuge: instead of breaking up queues by source and
destination, would it be more efficient to create queues by response type,
which are finite? The entire time I was wondering why bad requests are put
back into the same queue they came from. Shouldn't they be isolated? You could
then treat those isolated requests as one unit, so in a destination outage
you're not backlogged on your actual compute resources for good requests.

Nonetheless, seems like just another day in an engineers workday :)

~~~
guu
The examples are written in JavaScript but it also looks like they use Go. Not
sure which language this service is using specifically.

~~~
woahthere123
This particular destination service is written in JavaScript.

------
heme
_... step back and embraced an approach that aligned well with our product
requirements and needs of the team._

This ^^^. Everyone has an opinion on the Internet... frontend, backend, no-
end. It's probably not going to align perfectly with your needs (are you
Google/Facebook?). There are a lot of tools & specs out there. If you're
blindly following someone else's opinion without truly understanding your
users, product, & team, you'll probably accrue debt.

Also, devs often forget the team. If your choice of tech cuts the team's
velocity in half then it probably wasn't a good choice... even if it had some
other technical benefits.

------
dpeck
Microservice-based architecture will continue to be used quite a bit, as many
weaker engineering leads have latched onto it as a way to solve interpersonal
problems on their teams by letting everyone have their own little fiefdom. It
creates a terrible scenario and a very dysfunctional team, but the hype cycle
and a few high-scale orgs mentioning it mean it's part of the "mainstream" now,
and something that ill-informed leads will be pushing for the next decade,
whether it makes sense for the specific context (it doesn't) or not.

------
trhway
>The first item on the list was to consolidate the now over 140 services into
a single service.

Software engineering walks in circles. NoSQL people seem to be growing up and
learning about consistency and transactionality. I'm waiting for somebody to
rediscover threads and shared variables as a revolutionary way to improve
performance and greatly simplify the implementation of a system of "actors".

Joking aside, I think the greatest reason for Segment's microservices failure
was that they didn't use Kubernetes.

------
jb3689
Use Erlang/Elixir. You get the benefits of the monolith _and_ the benefits of
the microservices

------
stcredzero
_With every destination living in one service, our developer productivity
substantially improved. We no longer had to deploy 140+ services for a change
to one of the shared libraries._

I say this as someone who worked in an environment widely castigated for its
"monolithic" nature: It sounds like trying to have "modularity" hurt you,
because you didn't actually have it in the first place. When you stopped
pretending, the pain went away.

------
ben509
A number of comments touched on this, that microservices are also an
organizational strategy: you have a team that manages that particular service.

My company is probably typical in that we implicitly use microservices because
we consume dozens of microservices _provided by other companies_.

We only have a handful of services we maintain, each with dedicated engineers.

And what becomes a service? You want to answer two questions:

1. Does it have a concrete business justification?
2. Does it have clear functional requirements?

If it has a business justification, it will get people assigned to keeping it
running. If it has clear functional requirements, it will make sense to the
people working on it which service does what.

That's still pretty vague, so you want to look at who has done it well, and
why it worked for them.

Companies like AWS have been extremely successful in using microservices
because every last service has a business rationale: it's either directly
making money (EC2, S3, SQS) _or_ it supports the needs of customers who are
using a service that makes money (VPC, CloudFormation, all their internal
auth, billing, provisioning, security and such).

The caveat there is that the big B2B service providers are not a great model
for companies that have a lot of business logic.

------
franzwong
Going back from 100+ microservices to 1 monolith is another extreme to me...
Perhaps you could have 10 macroservices.

And shared library shouldn't contain business logic...

~~~
romanovcode
> And shared library shouldn't contain business logic...

What's the point then? They become just libraries, not services.

~~~
franzwong
I would put the business logic inside the service rather than inside shared
library. For shared library, I usually put utility functions.

------
lagadu
Interesting read; my biggest complaint is that they confuse unit testing with
integration and system testing. On every build you run only unit tests, which
don't rely on any external dependencies. If your test suite relies on querying
external servers, you're not testing your software: you're testing your
server, your internet connection, your credentials, their internet connection,
and their servers. That's insane to run on every build! That kind of system
testing should only happen when you're preparing to deploy a release, which
isn't something that happens every day because of how hard it is to do system
testing properly.

edit: it also seems they're missing a solid release cycle and manager. For
every release there's a TON of testing that needs to be conducted before going
to production. They mentioned over 40 improvements this year; that's two
releases a week. It's not possible to properly test each of those releases.
Had they done a single, well-tested release with multiple fixes every 1-2
months, the burden would be significantly reduced.
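
For what it's worth, the standard way to keep external servers out of unit tests is to inject the HTTP layer. A minimal sketch (TypeScript, hypothetical names; not Segment's code):

    // Inject the HTTP client so unit tests exercise the transform-and-send
    // path against a canned response instead of a live destination API.
    type HttpClient = (url: string, body: unknown) => Promise<{ status: number }>;

    async function sendToDestination(client: HttpClient, event: object) {
      // transform logic would live here; delivery goes through the injected client
      return client("https://destination.example.com/track", event);
    }

    // In production, pass a real client (e.g. one built on fetch).
    // In a unit test, pass a stub:
    const fakeClient: HttpClient = async () => ({ status: 200 });

    sendToDestination(fakeClient, { userId: "u1" }).then(({ status }) => {
      console.assert(status === 200, "send path works without the network");
    });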

~~~
dasil003
Terminology aside, why would you not automate all your testing so you can
release more frequently? We're not talking about a martian lander here; for a
SaaS startup velocity is much more important than perfect releases, you will
have more success by optimizing and de-risking for very frequent releases.

------
antishatter
Seems like an application of Sturgeon's law: there is a tendency to do what is
new because it is new. A well-designed microservice architecture can be a
tremendous boon for a product. A well-designed monolith can be a tremendous
boon for a product. About 90% of both are poorly done. It's interesting how
often people favor ANY motion over motion in the correct direction.

------
FollowSteph3
My view is that microservices are more of a workaround for company cultures
and silos than a software solution. At least in most cases.

------
exclusiv
I ran into the frustrations with microservices too, setting up a pretty
substantial ETL system using AWS Lambda among other services.

Originally you couldn't trigger Lambda functions from SQS (seemingly the most
obvious integration). You could use Kinesis, but the small print says Lambda
concurrency is limited by the number of Kinesis shards, which gets very
expensive.

Visibility/monitoring into most microservices is not good (Iron.io is quite
nice but any concurrency is really expensive). I don't like the workflow for
deployment and testing either.

So I shifted to a single EC2 instance with my application and Beanstalkd with
my own configurable workers. Way cheaper, easier to manage, a normal
programming workflow, etc.

For some use cases Lambda and other services are really nice and efficient,
but there are usually a lot of hidden limitations, so be sure to spend a lot
of time evaluating before committing. You often spend way more time fighting
the microservice drawbacks than the benefits are worth.

------
Sir_Cmpwn
Microservices aren't a magic bullet and won't save you from a poorly designed
application. You already need very good separation of concerns within your
application to make a success of microservices, and at that point an API that
communicates over HTTP isn't much different from one that communicates over
the application stack.

------
l1ambda
Microservices typically have two goals, performance and modularity. However,
porting a typical webapp to a fast, modular compiled language will typically
achieve at least one (often two) orders of magnitude performance improvement
over a typical interpreted language. We are seeing this to be true more often
than not, and a large performance gain off the bat like that may even obviate
a lot of the desire to move to microservices.

Furthermore, if one uses modules (as one should), one can arbitrarily and
somewhat trivially run those modules either in-process (compiled in) or out-
of-process (via REST, gRPC, Cap’n Proto, or another RPC system), e.g., in a
separate service/microservice/whatever you want to call it. This gives you a
best of both worlds approach where code can be arbitrarily run in a monolith
or a separate service as-needed. This changes the thought dynamics from a
rigid "monolith vs. microservice" decision to a more fluid process where
things can be rather easily changed on a whim. When modularity is the goal,
then services become something of a secondary concern.
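
A minimal sketch of that fluidity (TypeScript; the module name is hypothetical): callers depend on an interface, so the same module can run compiled-in or behind an HTTP boundary without changing call sites:

    // Hypothetical "Pricing" module: same interface, two deployment styles.
    interface Pricing {
      quote(sku: string): Promise<number>;
    }

    class InProcessPricing implements Pricing {
      async quote(sku: string): Promise<number> {
        return sku.length * 100; // placeholder business logic
      }
    }

    class RemotePricing implements Pricing {
      constructor(private baseUrl: string) {}
      async quote(sku: string): Promise<number> {
        // Same module fronted by HTTP (Node 18+ global fetch).
        const res = await fetch(`${this.baseUrl}/quote?sku=${encodeURIComponent(sku)}`);
        return Number(await res.text());
      }
    }

    // Swapping deployment styles is one line at the composition root:
    const pricing: Pricing = new InProcessPricing(); // or: new RemotePricing("http://pricing")
    pricing.quote("sku-123").then(console.log);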

Microservices are used as something of a sledgehammer to force modularity and
performance in languages that lack proper modularity and/or are innately slow
or otherwise inefficient, while suffering orchestration costs and the
performance penalties of copying data across multiple processes and networks
as well as making it harder to derive a single-source of truth in some cases.

Probably a good approach for a typical webapp looking to improve performance
would be to first port the core logic to a modern, fast, compiled language
with modules, evaluate the performance from there, and then determine whether
any modules should be split out into separate processes or services.

Like NoSQL, microservices can be (but not always are) a case of the cure being
worse than the disease; however, they can also be useful in certain situations
or architectures. Like anything in engineering, there are tradeoffs and it
depends on your situation.

------
imajes

      - All teams will henceforth expose their data and functionality through service interfaces.
      - Teams must communicate with each other through these interfaces.
      - There will be no other form of inter-process communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
      - It doesn’t matter what technology they use.
      - All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.
      - Anyone who doesn’t do this will be fired.  Thank you; have a nice day!
    

Bezos's old API mandate (as retold in Steve Yegge's platform rant) still kinda
works.

------
iamleppert
Microservices are an organizational and design choice that is intended to
mirror the structure of the teams or engineers, the actual human beings doing
this stuff.

It seems like Segment didn't really understand this at all, and instead chose
seemingly arbitrary and ridiculous service boundaries that had no relationship
to the real world.

See also people who create poor abstractions in their code and other sources
of technical debt.

This isn’t an article about how microservices architecture is somehow bad,
it’s an article about how bad Segment’s engineering team is. You can make the
same argument for anything. At the end of the day some architecture or process
or programming language or tech can’t replace real thinking about your problem
and correct application.

~~~
halayli
Nothing you said addresses the issues they mentioned:

> In early 2017 we reached a tipping point with a core piece of Segment’s
> product. It seemed as if we were falling from the microservices tree,
> hitting every branch on the way down. Instead of enabling us to move faster,
> the small team found themselves mired in exploding complexity. Essential
> benefits of this architecture became burdens. As our velocity plummeted, our
> defect rate exploded.

~~~
WaxProlix
That doesn't really say anything though: "Things were bad, and it felt out of
control, because of microservices. They didn't work, and actually made things
worse and more complex. The things that were supposed to be good were bad. We
didn't do good work, we did bad work, and slowly."

How do you address something like that coherently? I sort of agree with the
OP, it kinda sounds like they just didn't know what they were doing. Or maybe
that quote wasn't really the meat of their complaint?

------
ahallock
I don't think you reverted to a monolith. I think you built a singular service
with more cohesion than when it was split into 140 repos. And it sounds like
the split wasn't because of the microservices architecture but because you
didn't have a good solution for the queue problem.

------
skyisblue
Microservices benefit large teams; there's just too much operational overhead
for a small team to manage them. Large teams with monolithic applications
benefit from microservices because splitting into cross-functional teams
reduces the communication problem of a large team.

------
riknos314
Working at a large tech company, we tend to approach microservice division as
information authority division.

If the data doesn't need to exist in the same database, put it in a new
service with a separate database (or at least completely independent tables).

Ideally there is no shared code between services (and there shouldn't need to
be, because each service owns completely disparate data), so the only coupling
is the API definitions.

Each service is free to use whatever internal architecture they see fit as
long as they honor the API definitions they provide to their dependent
services.

In the case outlined in the article, the fact that each microservice had
similar enough concerns to all use some common libraries and the same database
makes me doubt that this should ever have been built as microservices to begin
with.

------
eximius
> The shared libraries made building new destinations quick. The familiarity
> brought by a uniform set of shared functionality made maintenance less of a
> headache.

> However, a new problem began to arise. Testing and deploying changes to
> these shared libraries impacted all of our destinations. It began to require
> considerable time and effort to maintain. Making changes to improve our
> libraries, knowing we’d have to test and deploy dozens of services, was a
> risky proposition. When pressed for time, engineers would only include the
> updated versions of these libraries on a single destination’s codebase.

 _When pressed for time, engineers would only include the updated versions of
these libraries on a single destination’s codebase._

I think I see the problem and it wasn't with microservices.

------
perlgeek
Two things come to mind:

* Splitting everything up into separate repos and services seems like a pretty radical move. You could start with separate queues that are handled by a single service, potentially with many instances, or you could try to break out a few parts that change often or carry a very high load.

* A big chunk of the complexity seems to be in transforming one message format into another. That is something that should be very easy to write tests for (see the sketch below). So you need a CI that tests all the services when you change the base libraries, and then it's pretty easy to find out whether a change to a base library is backwards compatible. And for libraries that are shared between many services, you should mostly stick to backwards-compatible changes.
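
On the second point, a minimal sketch of why pure format transforms are so cheap to test (TypeScript; event shapes are hypothetical):

    // Pure functions that map the internal event format to a destination's
    // format are trivially testable: no network, no shared infrastructure.
    interface InternalEvent { userId: string; name: string }
    interface DestinationEvent { user_id: string; event_name: string }

    function toDestination(e: InternalEvent): DestinationEvent {
      return { user_id: e.userId, event_name: e.name };
    }

    // Table-driven test: safe to run on every CI build.
    const cases: Array<[InternalEvent, DestinationEvent]> = [
      [{ userId: "u1", name: "Signup" }, { user_id: "u1", event_name: "Signup" }],
    ];
    for (const [input, want] of cases) {
      const got = toDestination(input);
      console.assert(JSON.stringify(got) === JSON.stringify(want), "transform mismatch");
    }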

------
mdekkers
It appears to me that this isn't so much a case of "microservices don't work"
as it is the case of several poor architectural choices. It is always really
easy to be part of the crowd in the peanut gallery and say "hah, you are doing
it wrong", but when actually at the coalface the pressures and perspectives
are different. Having said that, the microservices story isn't half as
interesting as the Centrifuge story they link to in the article. Their
approach (queues for everything) didn't work for them at their scale, so they
invented something different (Centrifuge) that does. Their poorly thought-out
choices around microservices have little to do with the problem they actually
solved.

------
grizzles

      +
      -js
      -java
        -monolith.jar {guava, netty, etc }
          -svc1
          -svc2
      -python
      -go
    

For biz code, I've seen this kind of lib-ifying architecture provide a nice
microservices workflow. The unifying heuristic: any dependency goes into the
lib project; anything unique to the service that requires no dep should
usually go into the service project. The nice thing about this is that it's
modular, with fast compiles, and it preserves optionality. Since everyone is
using the same deps, the code can live in the svc project or the lib project,
and a bit of namespacing convention makes it trivial to shuttle code between
the two projects to wherever it's most natural to have it.

------
abc_lisper
Ahh.. The sweet gong of the software hype pendulum

------
mnm1
3 full time engineers. No wonder they crashed and burned with microservices.
Even a slightly distributed system, what I call miniservices, with an api and
login server and half a dozen regular apps on top is way too much for such a
small team. In a similar, but much smaller setup, I estimate at one point our
two person team was spending as much as 25% of their time on cross-service
(between the app and api / login server) concerns that would simply not exist
with a monolith or something not distributed. That's 3 months out of every
year of development time wasted on concerns that shouldn't even exist.

------
twblalock
> Our initial microservice architecture worked for a time, solving the
> immediate performance issues in our pipeline by isolating the destinations
> from each other. However, we weren’t set up to scale. We lacked the proper
> tooling for testing and deploying the microservices when bulk updates were
> needed. As a result, our developer productivity quickly declined.

Well, I'm not surprised. Microservices are made possible by advances in
automated testing, CI, and deployment. You should have those things anyway,
even if you have a monolith -- but to go to microservices without them is a
pretty bad decision.

------
kyberias
It's as if these people just take a buzzword and start applying it without ANY
consideration of the basics of distributed systems. Are there any adults
around in these firms, or what is going on?

------
teilo
One of the things that I like about the Erlang/OTP ecosystem is that it
encourages you to design systems that are very modular, but does not force you
to run every service as an independent entity. By designing around supervision
trees of dependent services, one gains the advantages of a monolithic
environment without losing the ability to split services out onto discrete
nodes.

But that's an "all in" environment. You are Elixir/Erlang/OTP, or you are out.
That is, understandably, not an option for many use cases.

------
sciurus
I like that they highlight the tradeoffs between isolating faults and
optimizing resource utilization. However, you don't need microservices to
achieve isolation. You can have different worker pools of the monolith
configured to handle different destinations. The monolith actually gives you
more flexibility in how you approach the tradeoff. With microservices you're
forced into one pool of workers per destination, but with the monolith you can
choose any mapping of pools to destinations that makes sense.
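
A minimal sketch of that pool-per-destination idea (TypeScript; not Segment's design, names are hypothetical):

    // One process, several bounded worker pools, any mapping of
    // destinations to pools you like.
    class Pool {
      private active = 0;
      private queue: Array<() => Promise<void>> = [];
      constructor(private limit: number) {}
      submit(job: () => Promise<void>): void {
        this.queue.push(job);
        this.drain();
      }
      private drain(): void {
        while (this.active < this.limit && this.queue.length > 0) {
          const job = this.queue.shift()!;
          this.active++;
          job().finally(() => { this.active--; this.drain(); });
        }
      }
    }

    // A slow or failing destination can only exhaust its own small pool;
    // everything else keeps flowing through the bigger one.
    const pools: Record<string, Pool> = {
      flaky: new Pool(2),
      healthy: new Pool(32),
    };
    pools["flaky"].submit(async () => { /* deliver to the slow destination */ });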

------
tybit
“Recall that the original motivation for separating each destination codebase
into its own repo was to isolate test failures. However, it turned out this
was a false advantage. ”

At least they finally got to the right conclusion: they were on the wrong path
to begin with.

People seem to be blaming Microservices when they weren’t even close to
understanding what they were doing and why. I’d be much more interested in an
article about issues faced with Microservices where they actually tried to
slice their functionality based on their domain.

------
cnlwsu
They took the right approach: don't treat it like a religion, and do what
works best for your team and project. There is no single perfect model, so be
willing to change when it makes sense.

------
man2525
I'm thinking it's more of a software business life cycle. Microservices are
somewhat easier to deploy, which makes it easy for developers to add
capabilities to be used by UX developers. Once the eye candy and rough
functionality is in place, the sales team signs up new clients. The company
draws down investment in developers and puts more money into infrastructure.
The product moves from developer land to system land which is monolithic and
can handle heavier loads.

------
Hendrikto
Sounds like a typical case of hype driven development. When faced with a
problem they threw buzzwords at it, hoping it would go away.

Had they taken the time to explore the root causes of these problems and how
best to approach them, they probably wouldn't have taken the microservices
route to begin with.

I see this far too often. Instead of looking for solutions to problems, people
look for problems to try the new hot solution they read about. Happened with
ML/AI, NoSQL, Microservices, Blockchain, …

------
aniketpant
I probably don't understand their architecture well enough since they don't
try to explain it in detail. The blame is put on microservices, whereas it's
the architecture that could be fixed to improve performance and reduce the
cost of deployment.

A shared library works up until a certain point. If every service uses the
same shared library, then you are already drifting toward a monorepo world. A
monorepo for different services should work fine if the overall architecture
is sound.

~~~
adrianmsmith
Surely this is "no true Scotsman"? They've tried microservices and it didn't
work for them, and you're saying that if only they'd "fixed" their
microservices architecture, it would have worked?

------
Diederich
It's quite possible to do a monolithic app well and correctly. It's also quite
possible to do a big pile of microservices as a cohesive app well and
correctly.

In my professional experience, having run into numerous good and bad examples
of both, microservices tend to win because it's a lot easier to unwind badly
implemented microservices than a badly implemented monolithic service.

Of course, this is just another small set of anecdata.

------
akamaozu
Getting the feeling Segment didn't really stop using microservices.

Sounds like they just redrew their service boundary, from Integration APIs to
Business Function.

Centrifuge sounds like a new service that deals with connecting to integration
APIs, so they've replaced 140+ services with one.

Another service they've spun up is Traffic Recorder, and its responsibility is
to eliminate the need for http requests when testing integrations.

Feels like the biggest change is going from so many repos to a monorepo.

~~~
woahthere123
Segment is very much built around microservices. They just consolidated 100+
of these destination services into a single one. That's it.

~~~
akamaozu
Glad you think so too.

The title and opening paragraphs gave me the impression they felt they were
moving away from microservices, but maybe I didn't read those bits correctly.

------
sequoia
It seems like their initial problems were:

1. Tests hitting 3rd-party APIs are flaky & slow
2. The job-queuing mechanism can cause all jobs to be slowed by a single 3rd-party API outage/slowdown

Eventually they arrived at:

1. Replay responses to speed up HTTP-based tests
2. Create a smarter queuing mechanism in house

I'm not sure what microservices have to do with any of this. Anyway, kudos to
them for having the courage and openness to share what they learned from their
mistakes!

------
nikon
> Once the code for all destinations lived in a single repo, they could be
> merged into a single service. With every destination living in one service,
> our developer productivity substantially improved. We no longer had to
> deploy 140+ services for a change to one of the shared libraries. One
> engineer can deploy the service in a matter of minutes.

... 140 tightly coupled services wasn't the problem then?

------
tracer4201
Too many smart folks that I've worked with, for some reason, just stop
thinking critically when it comes to certain ideas.

I was a product manager on a team of really rockstar developers. They all earn
at least $200k a year.

Instead of demanding more ambitious projects, you could keep about 99% of them
happy by just letting them use the new framework of the week to build their
next web app. Their excitement when they were green-lighted to use React was
mind-boggling. Building another dumb website with the new framework, yay.

Mindlessly applying microservice architecture is the same issue at heart.

~~~
pythonaut_16
Sounds like you really don't understand development if you think getting to
use React isn't a big deal.

React (and others like it, e.g. Vue) really is a huge win and solves a ton of
pain points common to front end development. It still has its own pain points,
but it's hard to overstate how much of an improvement it is over other older
approaches like jQuery.

~~~
mockingbirdy
I think he doesn't have a problem with that.

> Building another dumb website with the new framework, yay.

He's just astonished how excited they are although _what_ they build is
"another dumb website" in his opinion. Many developers love the "how?" and
don't care about the "why?". I don't judge it, but it can explain some of the
excitement. Angular, React and Vue are great for development. But Ember,
Backbone and others were also good back in their times. What he essentially
says is "We use specific technology so they get excited although the problems
we solve are boring af"

~~~
scarface74
The reality is that the days of the social contract between employer and
employee are long gone - i.e. "I show a company loyalty and they will keep me
at least at market rates and not lay me off to 'increase shareholder value'."

Developers have to use the new and shiny to keep themselves marketable and be
ready to jump ship at the first opportunity or out of necessity.

~~~
briandear
How frequently are developers being laid off? And why should companies just
keep people around if they aren’t adding value? The lack of that “social
contract” has resulted in much higher wages. Look at developer salaries in
Paris compared to New York. Since there is more job security in France, the
trade off is that “market rates” are dramatically lower.

~~~
scarface74
I'm not saying that the lack of a social contract for developers is bad. I'm
just saying it is a thing. But if you're wondering why developers always want
to do the new and shiny even if the underlying product is boring: they are
probably doing resume-driven development and looking at their next
opportunity.

My salary has gone up by $45K - $50K in the last 4 years by changing jobs
three times. I’m not complaining.

But on the companies' side, it was completely illogical not to pay me market
rates. They still had to pay my replacement market rates, and they lost
institutional knowledge when I left.

Why is it that HR will approve a req for a new developer at market rates but
has strict limits on what it can pay current employees?

------
philippz
This is what we did at STOMT from day one. For those interested in how
to accomplish this with PHP / Laravel / Lumen:
[https://www.stomt.com/blog/shared-components-across-
multiple...](https://www.stomt.com/blog/shared-components-across-multiple-
laravel-lumen-micro-services/)

------
luord
Probably off-topic, even if it's about the article: they should never have
relied on making real requests in their tests, especially unit tests.

Isn't the recommended practice to treat third-party services and libraries
like a black box? That's what I do, and that's how I figured out what their
solution was (roughly) before I read it. Felt a bit proud of myself.

------
gfodor
Before you go down the path of splitting your app up into microservices you
should grok erlang/BEAM/OTP. Lots of thinking went into its creation that
leads to highly reliable real time systems and at the very least some of the
ideas can be lifted in informing how to best design things. (But really, you
should probably just use it instead.)

~~~
qaq
Or just use Erlang/Elixir etc. and you will have your microservices platform
without k8s and most of the pain :)

------
martin_drapeau
Lots of people are judging/bashing the author (and team) for making bad
choices by jumping on the microservice bandwagon when it clearly wasn't the
correct thing to do.

Sure, but at the same time the team learned some valuable lessons, gained
experience and hopefully matured. I applaud Alexandra for opening up and
sharing that with us.

------
exabrial
The thing people failed to realize is that microservices are a way of
structuring people in teams in an organization, not a way of structuring a
product architecture.

To use words that are mine, microservices are a hack on Conway's law. One team
should be responsible for 2-3 microservices and should have a lot of autonomy.

------
dunk010
Perhaps they should have read this:
[https://programmingisterrible.com/post/162346490883/how-
do-y...](https://programmingisterrible.com/post/162346490883/how-do-you-cut-a-
monolith-in-half)

It would have saved them a lot of man-years of time.

------
devquixote
"We no longer had to deploy 140+ services for a change to one of the shared
libraries."

I would be curious as to what the focus and scope of these shared libraries
were such that they required frequent updates with cascading side-effects
requiring everything that leverages them to have to be updated.

~~~
f2prateek
Most of these libraries are open source. The one we had the most friction
updating was our library that wraps common logic for dealing with
user-generated events -
[https://github.com/segmentio/facade](https://github.com/segmentio/facade)
(this is the example that was referenced in the blog post).

------
cestith
If you're using microservices why are you using shared libraries? Wouldn't it
make sense to break the common portion into a separate service and define an
API for it?

I swear more people writing microservices need to read about flow-based
programming and perhaps the actor model.

------
spion
Seems like an instance of "The wrong abstraction" ?

[https://www.sandimetz.com/blog/2016/1/20/the-wrong-
abstracti...](https://www.sandimetz.com/blog/2016/1/20/the-wrong-abstraction)

------
kennethh
The failure here seems to be that someone built microservices with the wrong
bounded contexts (domain-driven design). When one splits a monolith up into
microservices, it is essential not to split a single bounded context across
several microservices (or 100+, in this case).

------
bsclifton
Microservices are a great tool for the toolbox. Not everybody needs them. It
has definitely seemed like a fad over the past few years to push everybody to
use them (even projects or orgs that don't need them). I suspect that is what
happened here

------
alexanderscott
This misses the mark completely. A separate repo and service simply for
different serialization and routing? Each performing the same basic function.
That’s not a legitimate use-case for microservice separation and was doomed
from the start.

------
abalone
_> Recall that the original motivation for separating each destination
codebase into its own repo was to isolate test failures._

This seems like a weird reason to adopt microservices. Can't you isolate tests
within the same repo using folders?

------
crimsonalucard
We should bring categorical concepts into microservices: maintain
composability between entities rather than forming a graph of
intercommunicating objects, as in OOP.

I don't know if adhering to these principles will make microservices better.

------
aimatt
I imagine that your development velocity with this monolith is enabled by the
modularity of your code, which was enforced when it was microservices. This
benefit will wane; had you started with a monolith, things would have been
much worse.

------
kungfooguru
"With everything running in a monolith, if a bug is introduced in one
destination that causes the service to crash, the service will crash for all
destinations."

The real reason people should investigate Erlang/Elixir.

------
jskaggz
April fools trolling experiment: recast something like perl5/cgi as the new
backend web hotness with an edgy name and flashy tutorials.

But does perl5 even need hype? Soon it'll age into retro-chic status, a la
Lisp. :)

------
egidijus
Does anyone know if Segment is profitable? Revenue? Number of clients? Tech
team size? Tech team maturity? It could be a business decision... where their
dev/infrastructure salary costs do not translate to profits. The words
"developer productivity" are mentioned a lot; it depends how Segment
quantifies "productivity".

Businesses/companies at early stages of growth make decisions that allow them
to be competitive and profitable; "the holy grail of application architecture"
might not apply yet, or for the first 5 years, or ever.

------
tsenart
#MakeMonolithsGreatAgain

[https://twitter.com/tsenart/status/1017061272188280833](https://twitter.com/tsenart/status/1017061272188280833)

------
amorousf00p
I don't know. If this is really how software is written in large scale web
service environments you will always have problems. It just seems like sh*t to
me.

------
SerjEpatoff
>>Unless you’ve been living under a rock...

The monolithic ScyllaDB, built on top of the venerable Seastar framework, is a
very good example of the competitive advantages of living under a rock.

------
jpswade
How on earth did you end up with 100s of services?

I can't imagine you have applied Conway's law.

I think there's some serious confusion between FaaS and microservice
architecture.

------
dexterdog
"microservices is the architecture du jour"

? There is nothing new about microservices. They've been a hot topic for far
longer than Segment has been an idea.

------
jaequery
So they went from managing 100s of little parts to just a single one and are
loving it. Sounds like a no-brainer to me.

------
ninjakeyboard
I've heard Jamie Alan talk about copying shared code into the services. I
think shared libraries are an antipattern.

------
softwarefounder
I once scoffed at microservices.

Then I was tasked with building a scalable platform around a message broker
system.

Then the switch clicked. I get it now.

------
verletx64
What's wrong with a circuit breaker?

------
rvr_
A 3-person team cannot manage 100+ repos with 100+ micro services. Sounds like
a big impedance mismatch.

------
johnlbevan2
It will be interesting to read your post in a few months' time, when you hit
the issues caused by using a monolith that could be resolved by splitting that
service apart.

Introducing components to match on various forms of a value seems odd.
Ideally you'd standardise on those names; but if that's not possible (and
you're using microservices), why not have a service to correct the name,
rather than a library deployed to every endpoint? Then you call that service
with data containing `first_name` or `givenname` and it returns the message in
the standardised form. Or have a "global key" service, where you send the
source system name and value and have them translated for the destination via
a lookup, allowing any field names or defined values to be translated by a
generic reusable component:

    
    
      Entity    | System Name | System Value | GlobalKey
      -------------------------------------------------------------------------------------------
      Boolean   | HR          | TRUE         | 52582622-4322-445b-bb7a-8ca118d0ca2b
      Boolean   | HR          | FALSE        | 688c2298-6b99-4c31-a356-1ab9b1caacde
      Boolean   | Finance     | 1            | 52582622-4322-445b-bb7a-8ca118d0ca2b
      Boolean   | Finance     | 0            | 688c2298-6b99-4c31-a356-1ab9b1caacde
      Boolean   | Sales       | Yes          | 52582622-4322-445b-bb7a-8ca118d0ca2b
      Boolean   | Sales       | No           | 688c2298-6b99-4c31-a356-1ab9b1caacde
      FieldName | HR          | GivenName    | 3de31cff-e4bb-4d4a-a819-fd96c7c5032e
      FieldName | HR          | Surname      | e799b891-1eeb-4598-85a5-a9534d3a3a4c
      FieldName | Finance     | First_Name   | 3de31cff-e4bb-4d4a-a819-fd96c7c5032e
      FieldName | Finance     | Last_Name    | e799b891-1eeb-4598-85a5-a9534d3a3a4c
      FieldName | Sales       | FirstName    | 3de31cff-e4bb-4d4a-a819-fd96c7c5032e
      FieldName | Sales       | Surname      | e799b891-1eeb-4598-85a5-a9534d3a3a4c
    

As for "handcrafted XML"... the alternative looks like handcrafted code... For
translation a language like XSLT which was designed for translating data from
one format to another seems like a good choice. You can also use this to get
around your translation issue by defining a central "universal" format; so you
can have XSLTs to translate messages from source systems to the universal
format, then other XSLTs to translate from universal to the destination; so
that adding or removing a system (/service) only impacts that system / you
don't need to rewrite every point-to-point interaction of that service with
another (OK, not point to point since we have microservices; but if they're
not adding value by abstracting you away from point to point then essentially
you've just got complex point-to-point interactions rather than simple point-
to-point).
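
Going back to the "global key" table above, a minimal sketch of the lookup (TypeScript; the rows reuse two entries from the table, the service around it is hypothetical):

    // Map an (entity, system, value) triple to its global key, then back out
    // to the destination system's local value.
    interface Row { entity: string; system: string; value: string; key: string }

    const rows: Row[] = [
      { entity: "Boolean", system: "HR",      value: "TRUE", key: "52582622-4322-445b-bb7a-8ca118d0ca2b" },
      { entity: "Boolean", system: "Finance", value: "1",    key: "52582622-4322-445b-bb7a-8ca118d0ca2b" },
    ];

    function translate(entity: string, from: string, value: string, to: string): string | undefined {
      const key = rows.find(r => r.entity === entity && r.system === from && r.value === value)?.key;
      return rows.find(r => r.entity === entity && r.system === to && r.key === key)?.value;
    }

    console.log(translate("Boolean", "HR", "TRUE", "Finance")); // "1"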

------
andrew_
"Everything old is new again" the mantra of software development, fashion,
etc, etc.

------
aimatt
Someone better tell Google to merge all their code back into one repo, haha

------
atombender
People might read this article as an argument against microservices. It's not,
in my opinion. It's an argument against impractical design. Their system
should never have been 50 separate repos in the first place; in terms of
design, it's all one app that would benefit hugely from being a single
coordinated system.

Microservices work when they are separate, independent, single-concern systems
that coordinate using APIs. People often go overboard in splitting apps up
into small pieces, even when those pieces logically belong to a single system.
Start with figuring out the subsystems, then considering whether they are
worth splitting in the first place.

It's worth pointing out that microservices don't mean separate repos or even
codebase separation. What matters most is encapsulation. Monoliths grow
horrible because they end up as balls of spaghetti; forcing modularization at
the service level is a way to avoid such messes by reducing the individual
parts to manageable sizes, allowing a part to be replaced without worrying
about its tendrils having grown through the whole system.

For me, the biggest value of microservices is composition: thinking about
modules as off-the-shelf components that you use as parts to build something
bigger. Using a complete enough set of microservices, I can build a frontend
or client that has zero app-specific backend code. For example, if I have a
generic data layer (think Firebase), a user database layer with OAuth/OIDC,
and a way to store images, then I can build Instagram from scratch with no
backend development at all. That's very powerful.

But once I need some specialized, app-specific stuff ("business rules"), such
as rating of photos, commenting, moderation, etc., then those probably
wouldn't be microservices! The concerns there are unified for the most part,
and disentangling them would just lead to annoying fragmentation. A single
use-case-specific service ("monolith") would exist at the center of it all.

On the other hand, composition is mostly useful if your pieces are going to be
reusable. If I intend to build more than one Instagram, or maybe a Facebook
(which also needs data storage, and logins, and photos, etc.), then the
individual pieces would be reusable and could just be shared between the apps.
But if I'm just building Instagram for 5 years and I'm not building a series
of apps for different use cases, reusability has zero importance, and I might
as well just move everything into a single monorepo and forget about making
anything general-purpose. (Each piece should be general-purpose _enough_ , but
they usually don't need to be so generic that you could open-source it for
everyone.)

I never liked the word "microservice", and I think we'd be better off if we
called them, say, modules or subsystems.

------
theptip
I can't help but think that this is an experience report covering
"microservices-done-wrong considered harmful". There are definitely pitfalls
to the microservice approach, but there are canonical answers to all of these
issues. That said, the right approach is the one that fits with your team's
disposition, skill-set, and experience level, so ditching microservices might
well have been the correct choice in this case.

> Eventually, all of them were using different versions of these shared
> libraries. We could’ve built tools to automate rolling out changes, but at
> this point, not only was developer productivity suffering but we began to
> encounter other issues with the microservice architecture.

In the extreme case where you have 150 microservices and 3 devs, I think that
spending a few days to build a tool that auto-updates your common deps and re-
runs your tests would be a good investment. Or you could pay someone else to
do this with a service like
[https://www.dependencies.io/](https://www.dependencies.io/). Handling common
code is one of the known pain points in microservices, so it's worth tackling
head-on. (Last I saw Netflix handles this by the rule "no services can share
code unless it's by an open source library", which encourages common code to
be thoughtfully packaged and released.)
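
As a rough sketch of such an auto-update tool (TypeScript for Node; repo and package names are hypothetical, and the repos are assumed to be cloned side by side), the whole loop is little more than:

    // Bump the shared libraries in every destination repo, run its tests,
    // and only commit when they pass.
    import { execSync } from "child_process";

    const repos = ["dest-a", "dest-b"];            // hypothetical: the 140+ repos
    const sharedLibs = ["shared-integration-lib"]; // hypothetical package name

    for (const repo of repos) {
      try {
        for (const lib of sharedLibs) {
          execSync(`npm install ${lib}@latest --save`, { cwd: repo, stdio: "inherit" });
        }
        execSync("npm test", { cwd: repo, stdio: "inherit" });
        execSync(`git commit -am "chore: bump shared libraries"`, { cwd: repo, stdio: "inherit" });
        // push + open a PR with whatever tooling you use
      } catch {
        console.error(`${repo}: update or tests failed, flagging for a human`);
      }
    }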

> The additional problem is that each service had a distinct load pattern.
> Some services would handle a handful of events per day while others handled
> thousands of events per second. For destinations that handled a small number
> of events, an operator would have to manually scale the service up to meet
> demand whenever there was an unexpected spike in load.

I can see this being tricky to tune, and am hesitant to opine without knowing
the details, but if you can fix the problem by bundling all the services into
a single monolith (i.e. aggregating all load into N nodes), then you should
also be able to fix the problem by using a cluster scheduler like k8s with
equivalently sized nodes. As long as your bursts aren't a large multiple of
your baseline system load, both approaches should work equivalently (to first
order). It sounds like they were running individual instance(s)
per microservice, which isn't a good fit for very bursty services.

And as a bonus, with a cluster scheduler you get a number of primitives to do
resource reservation, which you don't get for free if you're merging all of
your services back inside a single monolith. This means the problem of back-
pressure from a single misbehaving endpoint -- which was one of the reasons
they moved to microservices in the first place -- will probably come up in
some form down the road.

> Recall that the original motivation for separating each destination codebase
> into its own repo was to isolate test failures. However, it turned out this
> was a false advantage. Tests that made HTTP requests were still failing with
> some frequency. With destinations separated into their own repos, there was
> little motivation to clean up failing tests. This poor hygiene led to a
> constant source of frustrating technical debt.

I don't have anything to say here except... don't do this? If your tests are
failing you should be fixing your tests (or removing them if they aren't
adding value), not adding new integrations. Consistently-failing tests are a
big warning sign that your CI/CD process is not in good shape, and a healthy
CI/CD process is a strict precursor to doing microservices successfully.

> The outbound HTTP requests to destination endpoints during the test run was
> the primary cause of failing tests. Unrelated issues like expired
> credentials shouldn’t fail tests.

Your UTs shouldn't be hitting your external dependencies; the correct solution
here is the one that they eventually landed on, i.e. to either record/replay
real HTTP requests, or to mock out the HTTP responses manually. You still need
real integration and smoke tests in a production-like environment to make sure
that you've not missed a change in the remote API schema. IME this is one of
the biggest challenges of working with external APIs, and I don't envy the
task of maintaining 150 integrations, however this issue seems unrelated to
microservices.
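
A minimal sketch of the record/replay idea (TypeScript for Node 18+; the mode switch and fixture file are hypothetical, not a specific library's API):

    // In record mode, real responses are written to a fixture file; in
    // replay mode (CI), the fixture answers instead of the network.
    import * as fs from "fs";

    const FIXTURES = "fixtures.json";
    const mode = process.env.HTTP_MODE ?? "replay"; // hypothetical switch

    async function request(url: string): Promise<string> {
      const fixtures: Record<string, string> = fs.existsSync(FIXTURES)
        ? JSON.parse(fs.readFileSync(FIXTURES, "utf8"))
        : {};
      if (mode === "replay" && url in fixtures) return fixtures[url];
      const body = await (await fetch(url)).text(); // live call, record mode only
      fixtures[url] = body;
      fs.writeFileSync(FIXTURES, JSON.stringify(fixtures, null, 2));
      return body;
    }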

------
LukeB42
Seems oversimplified to me. Not one mention of authentication and single
sign-on in the article.

------
wedgeantilles
Everything is cyclical.

------
dotdi
Another article whose TL;DR amounts to "we jumped blindly on the bandwagon,
made obvious mistakes along the way, and microservices are the root of all
evil".

Thanks, but no thanks.

------
_Codemonkeyism
"[...] the small team found themselves mired in exploding complexity."

Key sentence. Using the wrong tool for a small team, blaming the tool.

------
primeblue
Why not use a package manager to publish a library for each destination?

Then just release a new package version and reference it whenever there is a
change to the target.

Should work well in either a monolith or micro service.

------
lafar6502
kudos to people paying for all this mumbo-jumbo

