
Why Segment Went Back to a Monolith - BerislavLopac
https://www.infoq.com/news/2020/04/microservices-back-again/
======
nickcw
I think that the problem here was that they were fighting against Conway's
Law:
[https://en.wikipedia.org/wiki/Conway%27s_law](https://en.wikipedia.org/wiki/Conway%27s_law)

> Any organization that designs a system (defined broadly) will produce a
> design whose structure is a copy of the organization's communication
> structure.

I think microservices work well in organizations that are big enough to have a
team per microservice. However, if you've just split your monolith up and have
the same team managing lots of microservices, you've made a lot more work for
the team without the organisational decoupling, which is the real win of
microservices.

In my experience it is really difficult to fight Conway's law, you have to
work with it and arrange your business accordingly.

~~~
strictfp
Noo! Building teams around software components cements your architecture and
prevents most cross-cutting improvements.

I'll claim that splitting a well-structured monolith into microservices will
always make it less maintainable, but it might be worth it if you need it for
some reason such as elasticity or failure tolerance.

But for the love of god, keep the design open. Don't tie the existence of
internal software components to people's livelihoods.

~~~
yowlingcat
What is your alternative? Tying "the existence of internal software components
to people's livelihoods" across the expanse of the entire codebase is the only
remotely effective approach I've seen to scaling the SDLC.

~~~
cturner
"What is your alternative?"

Aggressively small teams, with no hands-off middle-management layer.

You can build massive capability around a small number of well-managed
message-backbones and a single codebase. By keeping the number of hands small
and the structure flat, you force high standards. (Skilled staff won't
tolerate distractions caused by bad engineering or inadequate automation.)

Heuristic for analysing firms: who has strategic power in decision-making?
Conventional answer: a group of hands-off middle-managers who run on meeting
tempo, and who are valued by how many people and systems report into them.
Under AST (aggressively small teams): an engineering effort running on maker
tempo in cooperation with a hands-on sales effort.

Microservices tend to have multilateral contracts with other systems in the
organisation. This steers all planning towards meetings. This creates middle-
management bloat.

~~~
herval
Is there any example where this works (articles, presentations, etc)? In
particular, anywhere with more than a couple dozen developers?

~~~
dodobirdlord
Amazon has a famous love for what they call “two-pizza teams” and you can find
writeups about the philosophy by searching the term. The joke is that a team
should be small enough that you only need to order two pizzas to feed them
all. The philosophy is about the number of participants in the decision-making
process. Keep teams small and give them total ownership of decision making so
that decisions can be made by a small group of people who work with each other
every day. That way no meetings (and certainly no cross-team meetings) need to
happen for most decisions to be made.

~~~
herval
Amazon is very well known for having A LOT of middle managers too, so I'm not
sure it's a good example?

~~~
dodobirdlord
Seems sorta reasonable that if you need a manager for every 6-8 engineers, you
would end up with a lot of managers.

~~~
herval
OP’s post was “Aggressively small teams, with no hands-off middle-management
layer”. Teams of 6-8 SWEs, plus a hands-off people manager reporting to a
middle manager, who reports to a director, is how Amazon organizes teams, so
it isn’t an example of what they suggested...

------
dpix
I see a lot of places that seem to either think that:

1. Microservices will let them ship things faster, or

2. It's microservices everywhere or nothing

Microservices might let you ship faster if you are really good at deciding
where to draw the lines between services and really good at managing multiple
deployment pipelines and all the infra - that's a pretty tough ask.

Also, if you have a monolith it's perfectly fine to pull out one or two parts
that need to scale much more efficiently and leave most of your codebase in
the monolith, but a lot of the time I see companies decide that once they have
created one microservice, the monolith is the worst thing possible and needs
to be broken up entirely.

My general rules for this are to always start in a monolith and break things
out as they start to fail or break other parts of the codebase, and don't go
all in just because you now have one microservice that works well by itself.

~~~
eweise
Starting with a monolith could lead to really difficult refactorings unless
you structure the code in a way that it can be easily decoupled.

~~~
JamesBarney
Monolith -> microservices : difficult refactoring

Microservices -> monolith : difficult refactoring

Microservices with poorly chosen context boundaries -> microservices with well
chosen context boundaries: very difficult refactoring.

~~~
eweise
"Monolith -> microservices : difficult refactoring" I guess my point is that
it doesn't have to be complicated if you architect the monolith carefully.
That usually doesn't happen though because frameworks don't necessarily
promote the practice and projects are short sighted.

~~~
JamesBarney
It's also really hard. Trying to determine how to split up any code base into
logical divisions such that, when adding the next five years of functionality,
you'll have the fewest cross-division processes is hard.

This is why Martin Fowler recommends starting with a monolith and refactoring
into microservices unless you have extensive experience building out very
similar applications in the same domain.

------
malisper
If you take a look at some of Segment's open source code, it isn't hard to see
why they wound up struggling with microservices. It looks like they subscribe
to the "left-pad" style of software development. They have tons of
repositories with fewer than 10 lines of code: a two-line repository for
calling _preventDefault_ [0], a four-line repository for getting the url of a
page[1], and an eight-line repository for clearing the browser state that
calls into eight different packages[2].

Disclaimer: I run a Segment competitor. I'm pretty biased, but still...

[0] [https://github.com/segmentio/prevent-default/blob/master/lib...](https://github.com/segmentio/prevent-default/blob/master/lib/index.js)

[1] [https://github.com/segmentio/canonical/blob/master/lib/index...](https://github.com/segmentio/canonical/blob/master/lib/index.js)

[2] [https://github.com/segmentio/clear-env/blob/master/lib/index...](https://github.com/segmentio/clear-env/blob/master/lib/index.js)

~~~
didip
What does segment.io do and what does your company do?

~~~
malisper
Sure. I'm the founder of freshpaint.io.

The premise of segment.io is that there are lots of tools that take user
behavior data from your site and it's a lot of work to integrate them all. For
example, when a user signs up, you may tell multiple different tools that a
user signed up:

    
    
- You tell Mixpanel so you can create graphs of how many people signed up.
- You tell Google Ads so Google knows a specific ad just resulted in a conversion.
- You tell Optimizely so it knows a specific page from an A/B test just converted.

Before Segment, you would need to write code for each tool separately. This
doesn't sound so bad, but it becomes a pain when you have dozens of different
tools and dozens of different events you want to track. With Segment, you only
need to tell Segment that someone logged in. Segment will then send that event
to all your other tools. You can think of it as like a multiplexer for user
behavior data. Instead of integrating 10 tools, you just integrate Segment.
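The "multiplexer" idea can be sketched in a few lines. This is purely an illustration of the concept described above, not Segment's actual API; the destination functions and `track` signature are made up for the example.

```python
# Hypothetical sketch of an event multiplexer: one track() call fans a
# user-behavior event out to every configured destination tool.

def send_to_mixpanel(event, props):
    # Real code would call Mixpanel's HTTP API here.
    return f"mixpanel:{event}"

def send_to_google_ads(event, props):
    return f"google_ads:{event}"

def send_to_optimizely(event, props):
    return f"optimizely:{event}"

DESTINATIONS = [send_to_mixpanel, send_to_google_ads, send_to_optimizely]

def track(event, props=None):
    """Tell every integrated tool about a single event, Segment-style."""
    return [dest(event, props or {}) for dest in DESTINATIONS]

# One call notifies all three tools:
track("Signed Up", {"plan": "free"})
```

The payoff is that adding an eleventh tool means appending one entry to `DESTINATIONS` rather than touching every `track` call site.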

The challenge with Segment is that you need to write custom code for every
action you want to send into Segment. This is bad for two reasons. First, the
end user of Mixpanel/Google Ads/Optimizely is usually a non-technical person
who doesn't know how to write code. What they have to do instead is file a
Jira ticket for an engineer to add a new bit of tracking to the website.
Depending on the size of the organization, that person can end up waiting two
weeks or more to start tracking a new bit of data from the website.

The other challenge is that people often don't know what to track ahead of
time or forget to track something important. For example, if you launched a
new feature two weeks ago and forgot to set up tracking on it, there's no way
to get that data back.

Freshpaint solves these problems by automatically collecting every user action
upfront. Anytime someone clicks a button on your site, that fires an event in
Freshpaint recording that the button was clicked. You can then use
Freshpaint's point-and-click UI to say that whenever someone clicks that
button, it counts as a "login" event, and send that event into different
tools. This is great for two reasons: the point-and-click UI allows a
non-technical user to send data into different tools, and because we track
everything up front, Freshpaint will have recorded every instance of an action
even if you forgot to track it. That way, even if you only decide to start
tracking some action today, you can use our "time travel" functionality to
recover every instance of that action since you installed Freshpaint.
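The autocapture-plus-retroactive-labeling idea can be sketched with a toy data model. Freshpaint's real implementation is not public; the selector-based storage and the `define_label`/`query` names below are assumptions made for illustration.

```python
# Toy sketch of autocapture: record every raw interaction up front, and
# attach human-friendly labels only later, when someone decides they care.

raw_events = []   # every click, recorded as it happens
labels = {}       # CSS selector -> friendly event name, defined afterwards

def autocapture(selector, timestamp):
    """Record a raw interaction, whether or not anyone asked to track it."""
    raw_events.append({"selector": selector, "ts": timestamp})

def define_label(selector, name):
    """Point-and-click step: name a raw interaction after the fact."""
    labels[selector] = name

def query(name):
    """'Time travel': return every past instance of a newly labeled event."""
    selectors = {s for s, n in labels.items() if n == name}
    return [e for e in raw_events if e["selector"] in selectors]

autocapture("#login-btn", 1)
autocapture("#signup-btn", 2)
autocapture("#login-btn", 3)
define_label("#login-btn", "login")   # defined *after* the clicks happened
assert len(query("login")) == 2       # both past clicks are still recoverable
```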

~~~
oblio
This is both interesting and horrifying when I remember how much we are being
tracked.

------
jillesvangurp
This is a discussion on pretty much every team I've been on for the last 5 or
so years. I mostly agree that this stuff is done for the wrong reasons.

IMHO it doesn't matter if you replace microservices with components, CORBA
objects, RPC objects, SOAP services, etc. It all boils down to chopping your
software into smaller bits that then immediately need to send messages between
them, find each other, defend their boundaries, etc.

So, the first mistake would be assuming this is a new problem to think about.
It's not. You can find similar debates about how to chop up software ever
since people moved beyond just having their code ship in punch card form.

The right discussion to have would first decide whether you want to break
down by your logical architecture, so that your deployment architecture
reflects it, or by your organization diagram (a.k.a. Conway's law). The next
step is deciding whether your primary goal is network isolation of unrelated
chunks of code or enabling asynchronous development of those chunks of code
(if so, there are other solutions). Usually it boils down to, again, Conway's
law: different teams just don't want their stuff to depend on shit happening
in another team because of internal bureaucracy and hassle.

Now say you have a valid business or technical reason for actually wanting
different stuff to be isolated (e.g. for scaling or security reasons). The
next step is deciding whether this means you also want to break up your code
base. Monorepos with microservices are a thing: look at e.g. lerna for
node.js, or multi-module gradle projects on the jvm. In Go this is well
supported as well. If you're really sure that you don't want microservices
just because of Conway's law, there are lots of valid reasons for having a
well-structured monorepo with some reuse of shared functionality, a simplified
review process and more visibility into what is happening.

IMHO people do this for completely the wrong reasons - like wanting to try out
some new language, or organizational issues - that ultimately result in
fragmented code bases, lots of devops overhead and complexity (it's never
simple or cheap), lots of project management overhead, etc. You pay a price.

------
kkapelon
>Shared libraries were created to provide behavior that was similar for all
workers. However, this created a new bottleneck, where changes to the shared
code could require a week of developer effort, mostly due to testing
constraints.

That is a big red flag. Microservices that suffer from shared code changes are
not really microservices, but a distributed monolith instead.

~~~
pjc50
This is really a time-of-binding argument; the difference between a "library"
and a "service" is that one is in-process and accessed via function calls,
and the other is out-of-process and accessed via RPC.

If you change code that other services are using, you can break those other
services. No way round that.

~~~
Autowired
While that is true, a microservices architecture can (and in my opinion
should) rely on messaging and account for message schema evolution.
Dependencies between services should involve far less coupling than
dependencies between an application and a library.
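One common way to get that looser coupling is tolerant-reader consumers: read only the fields you need and default the rest, so a producer can add fields without breaking anyone. This is a minimal sketch of the idea; the event shape and field names are invented for the example.

```python
# Tolerant message-schema evolution: the consumer supplies defaults for
# fields added in later schema versions, so old and new messages both work.
import json

def handle_user_signed_up(message: str) -> str:
    event = json.loads(message)
    user_id = event["user_id"]           # required in every schema version
    plan = event.get("plan", "free")     # added in v2; default keeps v1 working
    return f"provision {user_id} on {plan}"

v1 = json.dumps({"user_id": "u1"})                  # old producer
v2 = json.dumps({"user_id": "u2", "plan": "pro"})   # new producer
```

The same consumer code handles both message versions, which is exactly the decoupling a shared-library dependency doesn't give you: with a library, both sides must upgrade in lockstep.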

~~~
caust1c
Schema evolution is just as big of a dependency hell as managing direct
library dependencies. With a monolithic architecture, a lot of those concerns
are contained within the context of a single repo, and can be tested much more
easily than with many repos.

------
DrScientist
I'm struggling to understand the problem with shared code and the desire to
fragment the code repo!

Why can't you have both independently deployed microservices and a shared code
base?

If the deployment lifecycle is different for each microservices and each
_deployment_ is self-contained, then they can be deployed with different
versions of the code - even if they use the same source tree and share code.

Obviously the shared code needs to be properly maintained and evolved, but it
seems to me a lot of the software engineering problems occur when people move
away from source code dependencies - with great tooling - versioning, diffs,
debuggers - to other types of dependencies ( shared libs etc ) where the tools
are non-existent or very simple.

Now granted, if you needed to fix a critical bug in the shared code, that
would require a redeploy of everything, but that happens much less frequently
than the need to deploy a single service, which you can do with impunity as
long as you keep to your microservice contract. It also means the discipline
of keeping every service deployable at any time is maintained.

And if you didn't share code - you probably wouldn't be fixing a single bug
once, you'd have much more code, with many more bugs.

~~~
gowld
> Why can't you have both independently deployed microservices and a shared
> code base?

This is what everyone does, so I can't even comprehend what Segment was doing.
Maybe they were deploying a fleet of microservices inside a monolithic
deployment? If so, there's no wonder it failed.

~~~
zzbzq
We do separate code repos; my last place did separate repos; the place before
that did monolith(s) but still used separate repos for anything not in the
same monolith. I'm pretty sure separate repos are more common than a mono-repo
for separate services.

Seems to me, though, the problem is people trying so hard to reuse code.
That's the main problem cited in the article. People get really gung-ho about
reusing code and creating shared libraries, but reusing code is actually bad
most of the time. You should strive to only depend on things that you can
reasonably expect to not change, and that you don't need to update even if a
new version comes out. What you're supposed to do is take the code in the
shared library, make it a microservice, and obey the usual backward/forward
compatibility rules.

Using a monolith hides that problem because the code remains easy to update
and build, but it is just as fragile and in need of heavy testing whenever you
change code modules that have multiple consumers. The same criticism applies
to mono-repos.

~~~
kelnos
> _People get really gung-ho about reusing code and creating shared libraries,
> but reusing code is actually bad most of the time._

Disagree here, in general. I'm not in the ruby hyper-DRY camp, but copypasta
is not the solution to dependency management problems.

Creating shared libraries does require discipline; you should do your best to
just avoid breaking changes ever, and on the rare occasion you must, you need
heavy communication and testing to ensure consumers find out about the change.
And you can only change the API of the library; you can never incompatibly
change how the library interacts with other services. I get that this is hard,
but it's worthwhile if you can do it right.

We have thousands (maybe even tens of thousands) of lines of shared library
code at this point. Some of it is probably not necessary, but most of it we'd
be completely lost without. Reimplementing core logic and utility classes and
auth code over and over again is a great way to burn out your developers and
create bugs. And these bugs are even worse than your garden-variety bugs,
because you have to track them down and fix them over and over, and each fix
is slightly different because each reimplementation is slightly different.

I agree that sometimes sharing code is a bad idea, but asserting it's bad
"most of the time" is completely antithetical to my experience.

------
ajsharp
I see quite a few people defending microservices: the org is the problem, they
must not have written the software correctly, etc. Most org structures are not
great. Most software is not great. If you expect the exception to be the rule,
you're setting yourself up for a career full of disappointment.

Microservices are a modern re-branding of service-oriented architecture, but
'microservices' sounds cuter and less like it belongs in Java-land, and
there's some theoretical idea that splitting your app into even smaller pieces
will somehow make the whole thing better.

SOA/microservices solve a few problems and introduce a great many. The
original SOA proponents were pretty explicit about this: beware, there be
dragons here! But one of the main pieces of prescriptive advice from domain-
driven design is helpful for splitting into distributed services: split along
domain lines with minimum inter-service dependencies. Payments is an obvious
one. Microservices seem to buck much of this advice in favor of a blissfully
ignorant principle of "small" or "isolated". Good luck isolating something
that is not meant to be isolated.

Scaling software is hard. Scaling teams is harder. Trying to scale teams by
scaling/distributing software is an understandable goal but extremely hard to
pull off because of additional complexities and costs you incur in doing so.
Dev gets harder, deployment/ops gets harder, testing gets harder. Cross-team
communication, documentation, and API publishing and adherence go from being
very low impact within an org to suddenly being critically important.

To do SOA/microservices effectively you need complete organizational buy-in,
and you have to commit completely to developing tooling and solving all the
associated problems in moving to a services approach. Often, it's easier to
just put it all back together, organize the code in such a way as to minimize
merge conflicts, and wait for the ungodly slow test suite to run in CI. There
are good reasons you rarely hear SOA/microservices success stories outside of
enormous companies (Netflix, Facebook, Google, Amazon, etc.). Doing this stuff
takes an enormous investment and commitment from the entire organization, and
there are just lower-friction ways to skin this cat if you don't operate at
mega web scale.

Growing a monolith is hard. Growing a microservices/SOA architecture is also
very hard. Growing is hard.

~~~
DrScientist
> split along domain lines with minimum inter-service dependencies.

Exactly, and done right that quite often means big 'microservices'.

All too often I see the 'functional programming disease', where the aim is to
deconstruct everything into the smallest possible reusable functions ('micro'
services, right?), often prematurely, creating high levels of compositional
complexity with zero tools to help you understand how the actual 'app' - say,
a payment system - works when it's distributed across 20 services.

Yep, each single microservice is simple - but the payment system might not
be, and that's what you need to understand. Better if your payment system is
one thing, with maybe one or two pieces separated out if you need to scale
that part.

~~~
capableweb
"Yep each single microservice is simple - ..." but the whole is not.

I always find it more interesting what's _not_ in the single microservices -
the stuff you don't see. When you make a diagram with boxes and arrows, the
interesting stuff is the arrows, not the boxes themselves.

~~~
crdrost
Indeed my loudest prescription to people doing service-oriented architectures
of any kind is to simplify these arrows.

The common mistakes I see are two services sharing read access to a common
database, or discovering each other and sending RPCs to each other. Both are
really dangerous for exactly this reason! The common database obscures how the
two communicate with each other, and invariably everything connected to a
database becomes one service - call it a "mini-lith" if the overlapping sets
created by databases do not cover the whole architecture. The problem is the
preponderance of implicit arrows: when I reason about what it means to make
this datetime nullable so that I can store such-and-so, I need to consider
whether everybody who can read that datetime will be prepared for its
nullability.

RPCs and APIs are the same way. I add a contract about what I am outputting,
and then everybody needs to know about my contracts, and I must commit to them
or else modify all of my consumers. Because the arrows are bidirectional,
everything just becomes one monolith again.

Instead, I recommend message brokers -- all that pubsub stuff. A given service
tells all the other services simultaneously "this happened," and it is their
responsibility in their codebase to listen for that event and then say "okay,
then this must happen." Publishing a new version of the event is done by just
emitting both the old and the new version of the event and perhaps having a
shared standard for deprecation across the codebase so that you get
deprecation warnings in your prod logs.

Every service has its own database, and services generally communicate with
each other only through these broadcasts, which makes the arrows into the
"stuff you do see".
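The pattern described above can be sketched in a few dozen lines. The broker, topic names, and event shapes here are hypothetical, chosen only to illustrate each-service-owns-its-data plus dual-version event emission during a schema migration.

```python
# Sketch of broadcast-only communication: each service keeps its own store
# and reacts to published events; a schema change is rolled out by emitting
# both the old (v1) and the new (v2) event for a while.
from collections import defaultdict

class Broker:
    """Minimal in-process pub/sub message broker."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self.subscribers[topic]:
            handler(payload)

broker = Broker()
orders_db, billing_db = [], []   # each service owns its own database

def billing_on_order_placed_v1(payload):
    # Billing service listens for "this happened" and decides what must happen.
    billing_db.append(("invoice", payload["order_id"]))

broker.subscribe("order_placed.v1", billing_on_order_placed_v1)

def place_order(order_id, amount_cents):
    orders_db.append(order_id)
    # During migration, emit both versions so v1 consumers keep working
    # while consumers migrate to v2 at their own pace.
    broker.publish("order_placed.v1", {"order_id": order_id})
    broker.publish("order_placed.v2", {"order_id": order_id,
                                       "amount_cents": amount_cents})

place_order("o-1", 2500)
```

Note that the order service never calls billing directly; the only arrow between them is the published event, which is exactly what makes the arrows visible.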

------
lifeisstillgood
I have this thing about microservices/complexity: it follows Conway's Law -
the architecture follows the organisational structure.

If you push authority and decision making and responsibility for a _service_
to a (2 pizza) team then guess what, microservices work really well.

If you have vast monolithic centralised production operations teams, and no
way in hell is their C-Exec going to assign two of them to look after the
user-login service, you might not do so well.

Like most things, the organisation needs to change to get the best out of the
opportunities software offers. Those that don't will face increasing friction
and eventually die off.

~~~
gilbetron
Conway's Law isn't a law, it's just an interesting thought experiment.
Organization and architecture bidirectionally affect each other, but not
directly, and not completely. I hate how current discourse invokes these
different "Laws" as if they were physical properties of the universe. I've
worked at places with a strong, hierarchical organization that created a
wonderful set of "micro" services, and I've worked at places with a chaotic
environment that developed monoliths.

There are shitty hierarchies and shitty flat organizations, just like there
are shitty monoliths and shitty microservices.

Sorry if you actually agree with this more nuanced view; it's just that I've
seen Conway's "Law" invoked more than once in this discussion and it drives me
bonkers. I get the same way when someone ("Medium Developers" I call them -
more than green but less than seasoned, swallowing everything they read on
Medium as gospel and running around quoting it zealously) quoted the Liskov
substitution principle at me as if it were one of Newton's Laws.

~~~
musingsole
Conway's Law is a physical law in the same sense as Murphy's Law.

It's also obviously true. The organization builds the architecture. The
architecture either helps or hinders the organization. The organization builds
a new architecture. There's no indirect connection here. If you've seen
hierarchical organizations implement microservices, it's because that
organization's complement was a microservices architecture. And likewise for a
chaotic organization.

--well, a sidetrack: aren't strongly hierarchical organizations the best
suited for microservices? With all the strongly divided responsibilities and
whatnot?

~~~
87zuhjkas
> Conway's Law is a physical law in the same sense as Murphy's Law. It's also
> obviously true.

It's like a tautology: "In logic, a tautology is a formula or assertion that
is true in every possible interpretation."

------
gfodor
My takeaway from these kinds of stories is that microservices make sense if
it's no longer possible to operate a monolith. By existence proof, that was
clearly never the case at Segment. The common fallacy seems to be that
microservices lead to better software via better architecture, regardless of
human factors like team size. My sense is that it's the opposite:
microservices are a necessary evil to scale teams past a certain size due to
the bottlenecks that emerge with monoliths as more people begin trying to make
changes simultaneously, and should be viewed as neutral at best in terms of a
software architecture pattern to increase reliability, performance, etc. In
practice, it seems wise to keep your engineering team as small as possible
for many reasons, one large one being that past a certain point you will be
forced to move to microservices. All other things being equal, that's a move
you never want to have to make.

If you have hundreds of engineers then certainly microservice architecture
starts to make sense, since even the idea of transactional deploys of the
monolith breaks down due to queuing at that scale. But jeeze, don't pull that
trigger until you actually find yourself backing up on necessary complexity
like deploy queues, PRs stuck due to the inability to keep a branch current
given the velocity of master, etc. Don't let Conway's law lead you prematurely
to microservices. If I'm ever in a position where I'm feeling real pain that
creates urgency for microservices, I'm probably going to first ask whether I
can just fire some people to make the problem go away. The risk of the
transition to microservices is just that high.

It's the same rule of thumb as with other things like hiring, feature
roadmaps, etc: YAGNI. If you are hiring someone before the pain is so high the
work cannot be done otherwise, building features before people explicitly show
the need for them, or making deep, cross-cutting architectural changes that
impact everyone before they are strictly necessary due to concrete problems
shipping software, you're probably making the wrong trade of opportunity
cost, capital, etc.

------
ChrisMarshallNY
This sounds like the old arguments about OOP.

Turning everything into an object can make a small program into a big program,
so it’s maybe not such a good idea for small-scale stuff.

[http://www.solipsys.co.uk/new/TheParableOfTheToaster.html](http://www.solipsys.co.uk/new/TheParableOfTheToaster.html)

However, in my experience, OOP made it possible to do really big stuff.

It’s all about not having a “one-size-fits-all” approach. I don’t think it’s
just about scaling architectures; it’s about changing architectures to match
scale.

It’s difficult as hell to make these changes, because people get invested in
methodology, and insist on applying the same lens to everything we do.

It sounds like they had the right idea, but they probably had the wrong
people.

~~~
FpUser
"Turning everything into an object can make a small program into a big
program, so it’s maybe not such a good idea for small-scale stuff."

In my experience OOP actually makes programs smaller. Assuming of course they
have good programmers/architects and the program itself is larger than "Hello
world".

~~~
ChrisMarshallNY
Don't get me wrong. I love OOP, and have been using it since before it was
cool. It's been a standard wrench in my toolbox for decades.

In fact, I have been running into folks, these days, that don't understand it,
as, apparently, OOP is becoming "uncool."

I've always been a "right tool for the right job" kind of guy. I started off
with ML (Machine _Language_, not Machine _Learning_). I am quite comfortable
sitting down with a breadboard and flashing an OS.

But I remember the old days of OOP, where "classic" structured programmers
didn't "get" OOP, and designed these horrific chimeras.

I always make it a point to understand my methodology and drivers "to the
bone." Just because someone at a conference said it, doesn't mean that I
should use it for everything.

~~~
jjgreen
Please write a blog post called "Horrific OOP chimeras" and post a link on HN
...

~~~
ChrisMarshallNY
Oh...the stories I could tell...

But I have made it a point of personal ethos not to post criticism or
polemics, denigrating/excoriating the work of others.

I know that could buy me a lot of clicks (and probably some considerable HN
Above The Fold time), but I think we have enough negativity and finger-
pointing on the Internet.

If you read my stuff, you won't see much of that. I may, in a rather vague
way, allude to something that gives me a frowny-face, but I don't want that to
be part of my "personal brand," so to speak.

I do take tremendous personal pride in my work; both coding and writing, and
hold myself to a high bar. I may even project that bar onto others (only in
some circumstances), but I don't think it's helpful to do so in public.

I find it most gratifying to write a "This is how _I_ do this..." post, as
opposed to a "This isn't how _you_ should do it..." post.

------
mark_l_watson
I have always viewed using many microservices as something that adds
complexity, something to be used only when necessary.

I started working remotely as a consultant in the early 2000s when my wife and
I moved to a remote area. I had several development jobs that used the same
monolith pattern: I would embed everything in a web app using Apache Tomcat,
taking advantage of work threads for background tasks. The only external
services were a database and crontab settings to frequently snapshot
databases. This pattern was so easy to code to, so easy to debug and deal with
any runtime problems. One customer reported that a system ran without stopping
for six years (ouch, no OS upgrades??) until they restarted it on a larger
server.

Microservices can be great, but they're not always the best choice when
horizontal scaling is not required.

------
hedora
Are there any case studies where microservices went well?

From an end user perspective, Netflix runs in “constantly degraded” mode.

From an engineering perspective, they track “number of successful stream
starts”, instead of percentage of the time 100% of their services are working.
That’s a huge red flag.

As a researcher, the monitoring and fault-propagation / modeling work they’ve
done to get it to stay up at all is impressive, but it’s not clear all of that
tooling would be necessary if they didn’t have to reason about 2^N fault-
tolerance scenarios, where N = 100’s of microservices. That’s on the order of
one fault-tolerance scenario for each atom in the universe.

~~~
beastcoast
Amazon does microservices (or SOA) extremely well; in fact, they practically
invented the concept. It’s intricately linked with the two-pizza-team and
service-ownership concepts (you build it, you support it).

~~~
corpMaverick
and the concept of well-defined APIs for each service.

------
TeeWEE
It seems like they introduced microservices for the wrong reason. Instead of
having a service per team, they focused on services to solve a technical
problem:

"Having a code repository for each service was manageable for a handful of
destination workers"

Microservices should be introduced to make teams go faster, not to decouple
external API endpoints...

~~~
Cthulhu_
I mean you are right of course, but at the same time I can't knock the
superficial idea of having one codebase for one domain-specific application.
Applications / codebases like that are usually not the problem; it's
integrating them into the larger whole where things start getting fucky.

------
MatekCopatek
I always felt like the biggest benefit of microservices (for the average
company that just jumped on the bandwagon) was simply the fact that it forced
them to break things up. Yes, they could achieve the same result with none of the
overhead on a monolith, but it would take... discipline. It's much easier to
just enforce a hard external constraint.

Realizing this and circling back is still a useful life lesson.

~~~
guywhocodes
I think this, approaching DDD, is the most common reason engineers push for it
these days.

~~~
pknopf
It isn't worth the added friction though.

And if you happen to leak concerns in your services (in a monolith), it's
really easy to adjust, as opposed to having to coordinate the deployment of 5+
services.

And even then, a distributed monolith is still a risk.

Micro-services add cement to your project. Be prepared to keep boundaries you
write for a long time.

------
thedevopsguy
Without knowing more about their architecture it is difficult to comment
beyond the conclusion Alexandra Noonan came to, stated at the beginning of the
article. It looks to me like the architectural assumptions were changing too
quickly due to the demands of a fast-growing business. Having all their code
in a single repository means that they can control dependencies, versioning,
and deployment centrally; it gives them central control of their software
development lifecycle. I can't see how they could not have had the same
benefits of the monolith if their microservices had existed in a single repo
to begin with, with the appropriate tooling to enforce testing, versioning,
and deployment across all services in the repo. I guess this is the whole
monorepo debate and tooling.

This article for me is more about the complexity of managing a large team
across different sites where the architecture needs to change rapidly when
modularity is absent. They did get a measurable benefit around performance,
though. I wonder if Alexandra will comment on the challenges of running a team
in an environment of this complexity?

~~~
pjc50
If the microservices are in a single repo and tested and deployed together
then they are arguably no longer microservices but a "distributed monolith"!

~~~
thedevopsguy
I'm referring to having the same testing, deployment, packaging, and
versioning policies consistently applied across projects within the same
repository, not deploying, testing, and releasing together.

It's the drift and inconsistencies in these concerns across projects that
make deployment and operations less predictable.

------
davedx
My experience of moving to a microservice architecture: the most important
consideration with microservices is "who will develop, maintain and operate
them?". You can split them down functional lines, architectural lines,
whatever you like, but if you don't have teams with definite ownership of each
microservice (and that aren't swamped by maintaining lots of them like it
seems happened with Segment), it will become impossible. The "operational
complexity tax" is a real thing but is manageable if your engineer:service
ratio is sensible and considered.

------
silvestrov
I see it like organization of large companies: you have to split it into
divisions, but you can't make every single person their own division.

Don't make the divisions too large, don't make them too small.

The art is to make them the proper size for the particular company.

If you have a small company you don't need divisions. If you have a large one,
you need to make divisions as you no longer can speak to every single
employee.

------
gridlockd
Microservice architecture is one of those trends where the value is unproven,
the upfront costs are high, and the unknowns are unknown.

There's also a clear conflict of interest with SAAS and Cloud providers
benefiting from the perception that microservices are the way to go.

Under these circumstances, letting _someone else_ figure out all the issues is
the wise thing to do. Thanks to the authors for doing just that.

------
sealthedeal
They went to 50+ services in a few months and applied the same policies
across all services. It sounds like they didn't plan well enough and jumped
straight into it, without a good DevOps or infrastructure mindset. It was a
disaster waiting to happen. This shouldn't be an article that people read and
say "Oh, I'm never using microservices". This should be an article people read
and say: wow, that is exactly how NOT to break apart a monolith.

------
karmakaze
What's notably absent are descriptions of problems with versioning interfaces,
failures from network unreliability, problems managing connecting
infrastructure, or poor delineation of service boundaries leading to undesired
change dependencies--which is to say it seems like they executed well. There's
no sign they fell into common pitfalls.

There are definitely some good insights here that I don't often read about.
One is that with a sufficient number of microservices (say 50+) you not only
treat your instances as cattle, not pets; you have to treat the service types
en masse as cattle, not pets. This requires more automation and organized
management, as pointed out by the need for tuned autoscaling rules. It
requires continued investment into automating things you would do manually if
you had fewer than 50 services.

The other thing to consider is that going to microservices and back to a
monolith is not necessarily a failure. Microservices are good for periods of
high change velocity; once a platform is mostly built and requires much less
new development, consolidation makes complete sense. At all points we're
solving for impedance mismatch, whether that's the org structure, the velocity
of changes, or the number of developers vs the number of deployed units.

------
fouc
How many places have gone from monolith to microservices and back to monolith?
I'm sure there's been quite a few.

~~~
pjmlp
I bet plenty of them.

A team that fails to understand how to write modular code is just going to
write spaghetti RPC calls, while having to deal with all the traditional
failures and performance issues of distributed computing.

Naturally it is a recipe doomed to fail in the large majority of cases, but it
doesn't matter because whoever drove the change is no longer at the company
and a new consulting team/new hire gets the money to drive everything back to
the monolith.

So goes the money around on plenty of consulting gigs.

~~~
tartrate
> A team that fails to understand how to write modular code, is just going to
> write spaghetti RPC calls

This is interesting. I always assumed we were talking about good developers
here.

I wonder what's a more likely cause for a failed attempt at microservices. Is
it developer incompetence and lack of discipline, or is it environmental
factors related to the product and the organization?

~~~
pjc50
For almost all works produced by more than one developer the "good developer /
bad developer" dichotomy is just useless social darwinism. Talking about the
team, organisation, incentives, or business is far more useful.

(My favourite example is John Romero, part of the very small team that
produced Doom - but who also produced Daikatana, which keeps showing up on
lists of notoriously bad games.)

~~~
DonHopkins
That was pure hubris, foreshadowing GamerGate (and inventing the self-Pwn)! As
you say, it's all about the team, not the technique. And talking about that
particular team:

[https://en.wikipedia.org/wiki/Daikatana](https://en.wikipedia.org/wiki/Daikatana)

>One advert for the game became notorious; a 1997 poster containing the phrase
"John Romero's About To Make You His Bitch[. Suck It Down.]". According to
Mike Wilson, the advert was created by the same artist who designed the game's
box art under order of their chosen advertising agency. Originally, both he
and Romero thought it was funny and approved it. Romero had second thoughts
soon after but was persuaded by Wilson to let it pass. Speaking ten years
later, Romero said while wary of the slogan at the time, he went along with it
as he had a reputation for similar crass phrases. In the same interview, he
noted that reactions to the poster tarnished the game's image long before
release, and continued to impact his public image and career. In a 2008 blog
post concerning the recent activities of Wilson, Romero attributed him for the
marketing tactic. This prompted a hostile exchange of public messages between
the two at the time.

At least he apologized, though:

[https://www.youtube.com/watch?v=BF_sahvR4mw](https://www.youtube.com/watch?v=BF_sahvR4mw)

John Romero Apologizes for Trying to Make You His Bitch:

[https://v1.escapistmagazine.com/news/view/100748-John-
Romero...](https://v1.escapistmagazine.com/news/view/100748-John-Romero-
Apologizes-for-Trying-to-Make-You-His-Bitch)

>I'm going to quote our very own Shamus Young here for a moment: For almost a
decade, Ion Storm's Daikatana has been the example of "industry waste,
arrogance, and incompetence, as well as a universal punchline for things that
suck." The shooter was supposed to be an epic vision, the masterpiece of John
Romero - the mastermind behind genre-defining Doom and Quake.

>Then it came out in May of 2000, and it sucked. The arrogance and hubris that
crippled Daikatana have been well chronicled over the years, but none of it is
quite as infamous as the ad you see here to the right: "John Romero's About to
Make You His Bitch. Suck it Down." It was a pretty ballsy statement in itself,
but after the game's failure simply became laughable.

John Romero Is So Sorry About Trying To Make You His Bitch:

[https://kotaku.com/john-romero-is-so-sorry-about-trying-
to-m...](https://kotaku.com/john-romero-is-so-sorry-about-trying-to-make-you-
his-bi-5541406)

>Game designer John Romero and John Romero's hair ruled the roost during the
1990s. With titles like Doom and Quake, he not only helped popularize the
first-person shooter, he defined it. Then the unthinkable happened. He made
Daikatana.

>[...] Romero, who now says he is resigned to the ad, dished on the ad back in
2008, which evoked a saucy response from the marketer that spearheaded the
suck-it-down campaign.

Romero Dishes on the Ad:

[https://web.archive.org/web/20081225071219/http://kotaku.com...](https://web.archive.org/web/20081225071219/http://kotaku.com/345386/john-
romero-dishes-on-bitch-ad)

>[...] these are the kinds of jackass stunts he pulled [...]

Suck-It-Down Campaign Marketer's Saucy Response:

[https://web.archive.org/web/20081225070532/http://kotaku.com...](https://web.archive.org/web/20081225070532/http://kotaku.com/346816/gamecock-
head-tears-into-john-romero-its-getting-ugly)

>[...] and ill advised breast implants strewn across this fair nation [...]

------
BerislavLopac
As is so often the case, the choice between monolith and microservices is
not a binary one; rather, it is a sliding scale between two extremes.

On one end we have a true monolith: a single executable binary, with no
external dependencies apart from the OS bindings. This is very rare in
practice, most commonly found in games and probably mobile apps; when it
comes to Web-based services, even the traditional idea of a single-codebase
app usually has a SQL database as an external dependency.

On the other end of the scale there is a complex system consisting of hundreds
or even thousands [0] of tiny services that require complex orchestration
mechanisms such as a service hub or service mesh.

So each team (in a wide sense: could be a company, organisation, department
etc) needs to consider where they fall in the continuum, considering a) which
architecture will provide most benefit while b) still being maintainable by
the team; both the architecture and the team need to evolve together.

[0]
[https://qconlondon.com/london2020/presentation/monzo](https://qconlondon.com/london2020/presentation/monzo)

------
scott113341
Previous discussion from her blog post on the same topic:
[https://news.ycombinator.com/item?id=17499137](https://news.ycombinator.com/item?id=17499137)

------
me551ah
I'm just curious if there is a middle ground somewhere?

On one end you have a giant monolith: all services rolled into one, including
your API, middleware, and database.

On the other end you have microservices, which bundle functionality into
individual distinct units, each service being responsible for its own API,
middleware, and database.

Are there any preexisting patterns which seek to combine these two and come up
with an architecture midway between them? A few months ago I read an article
on HN about Data-Oriented Architecture, which comes close, though I'm
wondering if there are others.

~~~
bencollier49
> which includes your API, Middle ware and then Database

Layered microservices are an antipattern. In most cases, functionality is best
divided by domain.

~~~
thinkharderdev
This is why I struggle with microservice architectures. It seems like there is
a basic contradiction. On the one hand, it's vitally important that the
microservices are carved into the correct modules otherwise you get a
nightmare of operational complexity where simple functional changes require
coordinated changes across multiple services. But defining the correct
modules requires a bird's-eye architectural view of the entire system, which
seems contradictory to the idea of self-organizing, independent teams. I can see
how it works when the right way to divide things up is obvious or when you are
dealing with IaaS or PaaS services, but in a complex business domain who
decides how to carve things up?

------
rvz
Still waiting for Monzo's follow-up blog post on cutting down their outrageous
number of 1,500 microservices [0] and moving some back into monoliths. I
wouldn't be too excited about that number of microservices given the degree of
complexity involved. That is just too many.

[0] [https://monzo.com/blog/we-built-network-isolation-
for-1-500-...](https://monzo.com/blog/we-built-network-isolation-
for-1-500-services)

~~~
okal
Why do you think/feel that there are "too many"? What's the threshold for an
acceptable number of microservices? (Not asking this to be confrontational.
Just curious, because it's a sentiment I've seen before, without the reasoning
behind it being articulated.)

~~~
sweeneyrod
One per developer seems like a fairly loose upper bound.

~~~
overlordalex
Even then this is risky - if that developer is hit by a bus do you throw the
service away and have another developer write it again?

We recently had an interview candidate say this when we questioned the wisdom
of having over a thousand microservices: some were in languages that only the
one developer maintaining them used! For me this is insane, but I digress.

Monzo says that they have 800 people, and 1500 services. If we're generous and
say 500/800 are developers, then each developer is responsible for 3 services!
A team of 6 would have 18 projects in their domain.

~~~
rkangel
There is a classic tradeoff here between top-down organisational dictates
giving consistency vs engineering independence giving flexibility.

Two organisations that I know of who favour the latter are Spotify and
Netflix. It has benefits - different languages are good for different jobs and
engineers like to be able to choose their tools.

It would be bad if this were taken too far and something was written in a
language only one person knows, but that problem already exists with the
technical knowledge when something has only one maintainer.

------
brootstrap
"voices of experience pointing out that most decisions are made based on the
best information available at the time."

Funny; in my experience with digital, and particularly with larger corporate
jabronis, the people who make decisions are fucking McKinsey consultants who
know nothing about the actual project, are only contracted for 6 months, and
then they are gone. Rinse and repeat; maybe one out of every 3 or 4 attempts
somebody actually gets it right and the project doesn't completely fail.

------
hexmiles
> Also, a proper solution for true fault isolation would have been one
> microservice per queue per customer, but that would have required over
> 10,000 microservices.

I'm a bit confused: they seem to imply that they need a microservice per
customer/destination, but you generally have one instance (i.e. process) per
customer, not an entire separate codebase. The article seems to use the same
term for two different concepts. Or am I missing something?
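A toy sketch of the distinction (the queue naming scheme and worker function are hypothetical, not Segment's actual code): one codebase whose worker is parameterized by customer, spawned once per customer queue.

```python
from multiprocessing import Process

def queue_for(customer: str) -> str:
    # Hypothetical naming scheme: one queue per customer.
    return f"{customer}-events"

def run_worker(customer: str) -> str:
    # Every worker runs the same code; only the parameter differs.
    # A real worker would poll the queue in a loop; we just return a label.
    return f"worker[{customer}] consuming {queue_for(customer)}"

if __name__ == "__main__":
    # Same codebase, N instances: a process per customer,
    # not a codebase (or microservice) per customer.
    procs = [Process(target=run_worker, args=(c,)) for c in ["acme", "globex"]]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

Scaling to 10,000 customers then means 10,000 processes of one program, not 10,000 services.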

~~~
mping
Sounds like a job for Erlang

------
mcansky
there is a related post on Segment's blog: [https://segment.com/blog/goodbye-microservices/](https://segment.com/blog/goodbye-microservices/) which provides
a bit more detail and context

update : and related video
[https://www.youtube.com/watch?v=lv5o3qnQu5w](https://www.youtube.com/watch?v=lv5o3qnQu5w)

------
boffinism
That final paragraph is pretty brutal. Are engineers really so reliably
obnoxious?

~~~
7777fps
In general, yes.

Everyone seems to have their preferred style of coding, and it is an easy
defence mechanism, when presented with anyone who has tried it and found it
wanting, to say "Well, they didn't do it properly".

You find that with Microservices vs Monolith, Strong types vs Weak types,
Exception Handling vs Results, Agile vs Waterfall.

People fragment into camps which turn into echo chambers, and it's easy to
dismiss anyone who doesn't commit to the cult as impure and not worthy of
being in the cult anyway.

------
bpyne
The takeaway is about trade-offs. They made a rational decision to improve
fault isolation by dividing the app into smaller building blocks managed
separately. After working with it for a while, they realized the higher
operational overhead made the architecture a bad choice for them. So they went
back to a monolith architecture and tried to do fault isolation within the
boundaries of that architecture; the fault isolation might not be as good as
in the microservices architecture, but it was acceptable.

It's incredibly tough to know the full effect of a trade-off on your
organization until you start going down that path.

We're early in the process of adopting a micro-service architecture. With only
a handful of services so far, I can already see how a team of two is going to
spend a lot more time with operational issues and debugging.

------
seanpquig
It seems like a lot of the issues were around sharing code and libraries,
resulting from isolated codebases per service and the versioning hell of
shared libraries.

I work in an org that migrated to microservices over time, but intentionally
adopted a monorepo approach as part of it. It works quite well and seems to
avoid a lot of the pain points expressed here, while also gaining the benefits
of microservices.

There are definitely tradeoffs to the monorepo approach. It makes development
on shared libraries more delicate and stressful; however, this can be
mitigated by more robust cross-service CI. I definitely think it's a
worthwhile tradeoff compared to the painful cycle of shared lib versions
diverging across services and finding issues when some service finally gets
around to upgrading its version weeks or months after the shared lib changes.

------
alyricalgenius
The problem with microservices is that they come with a huge amount of
"that's the RIGHT way to do it", infinite articles talking about what they
are, and developers fighting over whether your services are too big or too small.

That usually results in abandoning the effort to actually map the use case of
your particular application and model your services to the size that makes
sense for your project. Any big enough system will need some services or
workers beyond a single monolith; it doesn't matter whether they say they
follow microservices or any other type of SOA. These silver bullets are
killing engineering. Every project needs time to be planned, thought out,
refactored, and analyzed. If you read a bunch of shit on HN and go applying
it, you end up with a random monster.

------
freedomben
Others have covered most of my thoughts but I haven't seen platform mentioned.

If you already have a platform like Kubernetes/OpenShift (preferably with a
service mesh), microservices make a lot more sense to me and can be done well.
It becomes easy to deploy and scale independently while still having very low
latency communication with a strong security model built in.

If you are deploying everything to completely independent
platforms/infrastructure, I get a lot more conservative with "what should be
its own service" and what shouldn't. Building a distributed monolith (a bunch
of dependent services that aren't reusable/composable) is the worst of all
worlds.

------
hinkley
> "If microservices are implemented incorrectly or used as a band-aid without
> addressing some of the root flaws in your system, you'll be unable to do new
> product development because you're drowning in the complexity."

Which is nearly verbatim what some of us have been telling you since before
the term microservices was coined.

Coupling is the problem. Yes, microservices add friction to coupling, but they
don't prevent it. Coupled microservices exist (boy howdy), and they're
resource intensive, resistant to evolution, or both.

------
deltron3030
This older "breaking up the monolith" GraphQL talk from Prisma is interesting:
[https://invidio.us/watch?v=_MmyTahR9ok](https://invidio.us/watch?v=_MmyTahR9ok)

Especially if you consider RedwoodJS, a new full-stack JS framework that's
built on Prisma technology (their stuff is an alternative to Rails' Active
Record ORM). My takeaway is that they provide a similar monolith-like
experience by acting as glue between different services.

------
leogout
> One of the key takeaways was that spending a few days or weeks to do more
> analysis could avoid a situation that takes years to correct.

Exactly my point when I was working on a new project architecture and we had
to choose between two authentication methods. Once all your applications rely
on an authentication system, you can't just switch to the other one like
that... Sadly, we did not take the time six months ago, and today we are
working on a migration which could have been avoided.

~~~
mynegation
Why don't you share your insights?

------
HeroOfAges
I wonder why they were so easily able to work with and integrate external
services, but couldn't work with and integrate their own internal
services. I think it has to do with the fact that the boundaries to the
external services are well defined and enforced because there is a physical
aspect to it. Perhaps they lack the will to create and enforce those
boundaries between their internal services.

------
d--b
It doesn't seem like their end architecture is anywhere near what they had at
the beginning... It's a lot smarter, and it sounds like it's a monolithic
distribution system that manages hot-swappable services. So the whole thing
seems to fall in the "micro services where we need them" kind of architecture.

They just went from naive monolith, to naive micro services, then to smart
coupling of the two...

------
ryanthedev
Poor design. Sharing code between microservices is always a design smell.

You are just building services on top of another monolith...

Sounds like you needed to abstract the work being sent to the worker, instead
of abstracting the worker around the work.

Meaning don't have many workers for one payload type. Abstract the payload and
have a single worker...

That's why most systems become complex and spaghetti: poor abstraction, so you
use shared code to fix it...
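A minimal sketch of that inversion, with hypothetical payload types (nothing here is from Segment's code): give every payload a common interface and run one worker over all of them, instead of one worker per payload type.

```python
from dataclasses import dataclass
from typing import Protocol

class Payload(Protocol):
    # The abstraction: every payload knows how to process itself.
    def process(self) -> str: ...

@dataclass
class EmailPayload:
    address: str
    def process(self) -> str:
        return f"emailed {self.address}"

@dataclass
class WebhookPayload:
    url: str
    def process(self) -> str:
        return f"posted to {self.url}"

def worker(queue: list[Payload]) -> list[str]:
    # One worker handles every payload type through the shared interface,
    # so adding a payload type never means adding another worker.
    return [p.process() for p in queue]
```

e.g. `worker([EmailPayload("a@example.com"), WebhookPayload("https://example.com/hook")])` runs both payload kinds through the same loop.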

------
phodge
My org is starting to migrate from a PHP Monolith to Microservices as a way of
freeing ourselves from PHP.

Microservices will require more time spent writing interservice APIs, and code
execution will be slower since many procedure calls will require data
serialization and network requests. But we believe it will be worth the
overhead to not be locked into PHP for every new component of the project.

------
sida
Might be a stupid question; it wasn't clear to me from the article.

Did they go back to a monolith service or a monolith repo? It really just
sounded like a monorepo.

------
neya
Microservices are not an organizational problem; microservices are a design
problem. If you implement DDD (Domain-Driven Design) first in your application
and then start to design the application around your DDD concept, then it
might work.

But it's extremely hard even to do that. Microservices simply complicate
things if any of your domains need to share code with each other. Many DDD
paradigms exist to address this, but none are practical. For example,
authentication-related code: if one domain sets a cookie and the other has to
rely on it to keep the user (a shared model between the two domains)
authenticated, then you need to duplicate code across the two domains, or at
the very least put it into some sort of shared helper/library, which DDD is
kind of against.

That's why it totally makes sense to go Monolith first and really identify the
parts of your application that are slowing you down either development wise,
testing wise or performance wise and put them into separate contexts.

Phoenix actually does microservices right, from scaffold generation all the
way to instructing best practices on keeping your domains properly separated.
But even then, I've burnt my fingers many a time trying to split simple CMS
solutions into multiple microservices, then going back to monoliths again.

------
resca79
I think the microservice approach is good when you can share those services
between many applications. In most cases microservices can be the new
premature optimization, because you can start to lose focus on the product
itself and think about the microservice as a product.

------
pennyintheslot
If you're up for a humorous take on microservices, you could have a look at
this video:
[https://www.youtube.com/watch?v=y8OnoxKotPQ](https://www.youtube.com/watch?v=y8OnoxKotPQ)

------
revskill
Whether monolithic or microservices, it's all about modularity. Monolith or
microservices is just the implementation of how you achieve modularity.

So the problem here, to me, is how you design the modularity of your system,
not how you implement it.

------
stepanhruda
> There is now a single code repository, and all destination workers use the
> same version of the shared library.

What if I told you this is unrelated to microservices? You can keep all of
their sources in the same repository, sharing dependencies.

------
acd
Another advantage of monoliths is speed. Running out of local L1-L3 caches
will be orders of magnitude faster than serializing, making a network round
trip, and deserializing JSON in a microservice.

Performance per watt/dollar of computing.
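You can see part of the gap even without a network hop: just the JSON encode/decode a service boundary forces on both sides dwarfs an in-process call. (A toy measurement, not a rigorous benchmark; real RPC adds network latency on top.)

```python
import json
import timeit

def handler(req: dict) -> int:
    # The actual "business logic" is a single addition.
    return req["a"] + req["b"]

def in_process() -> int:
    # Monolith case: a plain function call.
    return handler({"a": 1, "b": 2})

def simulated_rpc() -> int:
    # Microservice case, minus the network: serialize the request,
    # deserialize it server-side, then serialize and deserialize the
    # response -- with zero wire time included.
    wire = json.dumps({"a": 1, "b": 2})
    resp = json.dumps(handler(json.loads(wire)))
    return json.loads(resp)

direct = timeit.timeit(in_process, number=100_000)
rpc = timeit.timeit(simulated_rpc, number=100_000)
print(f"in-process: {direct:.3f}s, serialized: {rpc:.3f}s")
```

The serialized version does the same addition but pays for four JSON conversions per call; add a real network round trip and the ratio gets far worse.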

------
StreamBright
I don't blame them; microservices require discipline, careful planning,
in-depth monitoring and debugging capabilities, and a whole different mindset.
Very few companies can implement them successfully.

------
debrice
"Monolith vs Micro-services?" is philosophy at this point. It's a good brain
exercise, a great starter for debate, it reveals a lot of insights... all
because there is no actual answer.

------
viach
That may sound cynical, but if we agree that software development is 99% a
show (a bet, entertainment, whatever) for investors, then microservices are
good for keeping people excited.

------
patsplat
Is it about process space and programming languages? Or is it about source
control and CI architecture?

How is a monolith different from a monorepo?

------
danielrhodes
Before going full microservices, try writing your code in a hyper modularized
way and see if this doesn’t solve your problem first.

------
zerotolerance
Having read their articles a few times, the issues that they were attributing
to microservice architecture were really CI problems.

------
87zuhjkas
I think we can have both architectures, monolith and microservices, peacefully
co-existing, each with its own pros and cons.

------
Legion
Monolith vs microservice feels like a higher level case of the expression
problem.

------
patsplat
Note that one big database can decimate productivity as well.

------
mikejulietbravo
I saw a talk with Alexandra from Segment before. When it makes sense to go
with monolith it makes sense.

When it makes sense to use microservices, it makes sense.

Doing anything for no reason whatsoever never makes sense.

That is all.

------
crimsonalucard
I have an idea. Bring the concept of microservices into software!

Within software, module interfaces that can only communicate with one another
via socket-like serial interfaces with no type checking!

Or simply have all your software modules run as forked processes on the same
hardware and have them all communicate with one another via sockets or HTTP.
That means every software module must be its own server!

To further imitate microservices, make sure that code in one software module
can never ever be moved to another software module. Make it hard to reorganize
things. Also make sure teams can only ever work on one section of the code
base.

Does the above make any sense to you? If it doesn't, that's probably because
code organization using microservices doesn't make any sense, period: the
examples above are literally doing in software the same thing that
microservices do with hardware.

If it does make sense to you, then why are you using microservices to add
extra complexity to the code? If you can do the same in software then you'd be
doing the exact same thing as the hardware equivalent minus the extra
complexity of multiple containers or VMs.

Don't use hardware to organize code, use code to organize code and use
hardware to maximize performance.

------
crimsonalucard
Microservices, in terms of code organization, were always a redundant concept.

You can organize code with functions and namespaces; why do you need hardware
to segregate code? It only makes the segregation permanent, and offers nothing
else beneficial in terms of code organization.

The underlying reasoning was always that developers tend to move outside of
boxed software modules if the boundaries aren't enforced by hardware, so the
modules end up not being modules and everything gets blurred into a monolith.

I always figured that if you want really hard lines drawn between software
modules, you can do the same in software itself; why do you need actual
silicon or VMs/containers to do it?

The only real need for different services is performance, otherwise all the
benefits and downsides of microservices can be replicated in software.
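One sketch of drawing those hard lines in-process (the names here are hypothetical, and this is just one of several ways to do it): give each "service" a narrow declared interface and let callers depend only on that interface, never on the implementation. The boundary is as explicit as an HTTP API, minus the serialization and the containers.

```python
from typing import Protocol

class BillingApi(Protocol):
    # The published surface of the "billing module". Anything not declared
    # here is effectively private, just as it would be behind an endpoint.
    def charge(self, user_id: str, cents: int) -> bool: ...

class Billing:
    def charge(self, user_id: str, cents: int) -> bool:
        self._write_ledger(user_id, cents)  # internal detail, not in the API
        return True

    def _write_ledger(self, user_id: str, cents: int) -> None:
        pass  # persistence elided

class Checkout:
    # Checkout depends only on the interface; swapping the implementation
    # (or moving it behind a network later) doesn't touch this code.
    def __init__(self, billing: BillingApi) -> None:
        self.billing = billing

    def buy(self, user_id: str) -> str:
        return "ok" if self.billing.charge(user_id, 999) else "failed"
```

If elasticity ever demands it, `Billing` can be replaced by an RPC client satisfying the same `BillingApi`; the boundary was already there in code.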

------
crimsonalucard
I bombed an interview before because I said microservices can be really bad.

~~~
DonHopkins
Maybe it's because you weren't able to explain why you thought microservices
can be really bad, or your explanation didn't hold water. So what was the
explanation you gave them, or do you not want to say?

Or perhaps there are other reasons you're simply not recognizing? Like your
sarcastic tone?

------
pulse7
TL;DR: "If microservices are implemented incorrectly ... you're drowning in
the complexity."

~~~
the_mitsuhiko
I want to see the microservice architecture where you don't drown in
complexity.

