
Give Me Back My Monolith - zdw
http://www.craigkerstiens.com/2019/03/13/give-me-back-my-monolith/
======
padobson
I don't think I blame the author at all. I'm not sure why you would start with
microservices, unless you wanted to show that you could build a microservices
application. Monoliths are quicker and easier to set up when you're talking
about a small service in the first place.

It's when an organization grows and the software grows and the monolith starts
to get unwieldy that it makes sense to go to microservices. It's then that the
advantage of microservices both at the engineering and organizational level
really helps.

A team of three engineers orchestrating 25 microservices sounds insane to me.
A team of thirty turning one monolith into 10 microservices and splitting
into 10 teams of three, each responsible for maintaining one service, is the
scenario you want for microservices.

~~~
mirkules
We’ve done exactly this - turned a team of 15 engineers from managing one
giant monolith to two teams managing about 10 or so microservices (docker +
kubernetes, OpenAPI + light4j framework).

Even though we are in the early stages of redesign, I’m already seeing some
drawbacks and challenges that just didn’t exist before:

- Performance. Each service talks to the others via a well-defined JSON
interface (OpenAPI/Swagger YAML definitions). This sounds good in theory, but
parsing JSON and then serializing it N times has a real performance cost. In a
giant “monolith” (in the Java world), EJBs talked to each other, which,
despite being Java-only (in practice), was relatively fast and could work
across web app containers. In hindsight, it was probably a bad decision to
JSON-ize all the things (maybe another protocol?)
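The serialization tax described above can be made concrete. A minimal Go sketch (the `Order` type and both functions are hypothetical, for illustration only) contrasting an in-process call with the marshal/unmarshal round trip that every JSON hop pays:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Order is a hypothetical payload passed between two services.
type Order struct {
	ID    string  `json:"id"`
	Total float64 `json:"total"`
}

// inProcess is what a monolith does: pass the struct directly.
func inProcess(o Order) float64 { return o.Total }

// overTheWire is what each service hop adds: serialize, copy, parse.
func overTheWire(o Order) (float64, error) {
	b, err := json.Marshal(o) // encode on the caller side
	if err != nil {
		return 0, err
	}
	var decoded Order
	if err := json.Unmarshal(b, &decoded); err != nil { // decode on the callee side
		return 0, err
	}
	return decoded.Total, nil
}

func main() {
	o := Order{ID: "a1", Total: 99.5}
	direct := inProcess(o)
	wired, _ := overTheWire(o)
	// Same answer, but the second path paid for an encode, a copy, and a
	// parse -- once per hop, N times per request chain.
	fmt.Println(direct, wired)
}
```

A binary protocol (protobuf, gRPC) shrinks but does not eliminate this cost; only an in-process call avoids it entirely.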

- Management of 10-ish repositories and build jobs. We have Jenkins for our
semi-automatic CI. We also have our microservices in a hierarchy, all
depending on a common parent microservice. So naturally, branching, building
and testing across all these different microservices is difficult. Imagine
having to roll back a commit, then having to find the equivalent commit in the
two other parent services, then rolling back the horizontal services to the
equivalent commit, some with different commit hooks tied to different JIRA
boards. Not fun.

- Authentication/authorization also becomes challenging, since every
microservice needs to be auth-aware.

As I said, we are still early in this, so it is hard to say whether we have
reduced our footprint or increased productivity in a measurable way, but at
least I can identify the pitfalls at this point.

~~~
merb
> So naturally, branching, building and testing across all these different
> microservices is difficult. Imagine having to roll back a commit, then
> having to find the equivalent commit in the two other parent services, then
> rolling back the horizontal services to the equivalent commit

That should not happen. If it does, you don't have a microservice
architecture; you have a spaghetti service architecture.

~~~
mirkules
Heh, I like that “Spaghetti Microservices”.

You are right. It should not happen. But it is difficult to see these pitfalls
when unwinding an unwieldy monolith, when, as an organization, all you’ve
_ever_ done are unwieldy monoliths with a gazillion dependencies, interfaces
and factories.

We learned from it, and we move on - hopefully, it serves as a warning to
others.

~~~
james_s_tayler
MicroPasta

~~~
cestith
Tangled angelhair.

------
outworlder
I agree with the author to some extent.

The main thing, however, is that many people think that, by breaking up their
monolith into services, they now have microservices. No, you don't. You have a
distributed monolith.

Can you deploy services independently? No? Then you don't have microservices.
Can you change one microservice's data storage and deploy it just fine? If you
are changing a table schema and you now have to deploy multiple services, they
are not microservices.

So, you take a monolith, break it up, add a message broker, centralized
logging, maybe deploy them on K8s, and then you achieve... nothing at all. At
least, nothing that will help the business. Just more complexity and a lot
more stuff that needs to be managed and can go wrong.

And probably a much bigger footprint. Every stupid hello world app now wants
8GB of memory and its own DB for itself. So you added costs too. And
accomplished nothing a CI/CD pipeline plus sane development and deployment
practices wouldn't have achieved.

It is also sometimes used in lieu of team collaboration. Now everyone can code
their own thing in their own language without talking to anyone else. Except
collaboration is still needed, so you are accruing tech debt that you know
nothing about. You can break interfaces and assumptions, where your monolith
wouldn't even compile. And now no-one understands how the system works
anymore.

Now, if you are designing a system using microservices properly, then it can
be a dream to work on, and manage in production. But that requires good
teamwork on each team and good collaboration between teams. You also need a
different mindset.

~~~
qaq
Do you have a recommended way of handling transaction boundaries that span
multiple services? Everyone likes to outline how the happy path works, but in
the real world it basically comes down to this: you now have an eventually
consistent distributed system, and there is no generally valid way to unroll a
change across multiple services if one of the calls fails.

~~~
dudul
[https://microservices.io/patterns/data/saga.html](https://microservices.io/patterns/data/saga.html)

~~~
pojzon
So, a distributed ACID DB... No one so far (from what I have seen) has come up
with a better solution than the guys 50 years ago did.

~~~
dustindiamond
Well, we’ve gotten this far...

------
matt2000
When I started programming professionally it was the era of "Object Oriented
Design" will save us all. I worked on an e-commerce site that had a class
hierarchy 18 levels deep just to render a product on a page. No one knew what
all those levels were for, but it sure was complicated and slow as hell. The
current obsession with microservices feels the same in many ways.

There appear to be exactly two reasons to use microservices:

1. Your company needs APIs to define responsibility over specific
functionality. Usually happens when teams get big.

2. You have a set of functions that need specific hardware to scale. GPUs,
huge memory, high-performance local disk, etc. It might not make sense to
scale as a monolith then.

One thing you sure don't get is performance. You're going to take an in-
process shared-memory function call and turn it into a serialized network call
and it'll be _faster_? That's crazy talk.

So why are we doing it?

1. Because we follow the lead of large tech companies, since they have great
engineers; unfortunately, they have very different problems than we do.

2. The average number of years of experience in the industry is pretty low.
I've seen two of these kinds of cycles now, and we just keep making the same
mistakes over and over.

Anyway, I'm not sure who I'm writing this comment for, I guess myself! And
please don't take this as criticism, I've made these exact mistakes before
too. I just wish we as an industry had a deeper understanding of what's been
done before and why it didn't work.

~~~
pojzon
It's not about the age of the engineers but their maturity. Some just don't
care about the quality of their work because they get paid either way. Look at
Silicon Valley: "ageism" is real there. They need young devs with ideas and
huge skill to bring them to life, to stay ahead of the competition. Most
companies don't understand that and blindly try to copy it, often because
their management is not competent enough.

There are plenty of reasons for the current situation. The world and people
are complicated.

~~~
purple_ducks
> They need young devs with ideas and huge skill to bring them to life

More likely young devs with naivety and thus motivation.

------
metaphyze
I'd like to point out that microservices are not always as cheap as you may
think. In the AWS/Lambda case, what will probably bite you is the API Gateway
costs. Sure they give you 1,000,000 calls for free, but it's $3.50 per million
after that. That can get very expensive, very quickly. See this hacker news
post from a couple years ago. The author's complaint is still valid: "The API
gateway seems quite expensive to me. I guess it has its use cases and mine
doesn't fit into it. I run a free API www.macvendors.com that handles around
225 million requests per month. It's super simple and has no authentication or
anything, but I'm also able to run it on a $20/m VPS. Looks like API gateway
would be $750+data. Bummer because the ecosystem around it looks great. You
certainly pay for it though!"

[https://news.ycombinator.com/item?id=13418332](https://news.ycombinator.com/item?id=13418332)

~~~
013a
Worth saying: Now that ALBs support Lambda as a backend, reaching for APIG w/
a lambda proxy makes less sense, unless you're actually using a lot of the
value-adds (like request validation/parsing and authn). Most setups of
APIG+Lambda I've seen don't do this, and prefer to just Proxy it; use an ALB
instead.

ALB pricing is a little strange thanks to the $5.76/mo/LCU cost and the
differentiation between new connections and active connections. The days are
LONG GONE when AWS just charged you for "how much you use", and many of their
new products (Dynamo, Aurora Serverless, ALB) are moving toward a crazy
"compute unit" architecture five abstraction layers behind units that make
sense.

But it should be cheaper; back of the napkin math, 225M req/month is about
100RPS averaged, which can be met with maybe 5 LCUs on an ALB. So total cost
would be somewhere in the ballpark of $60/month, plus the cost of lambda which
would probably be around $100/month.
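The back-of-the-napkin numbers in this subthread are easy to check (rates taken from the comments above; actual AWS pricing varies by region and tier):

```go
package main

import "fmt"

func main() {
	const reqPerMonth = 225_000_000.0
	secondsPerMonth := 30.0 * 24 * 3600 // 2,592,000

	avgRPS := reqPerMonth / secondsPerMonth
	fmt.Printf("average load: %.0f req/s\n", avgRPS) // ~87 req/s, the "about 100RPS" ballpark above

	// API Gateway at $3.50 per million requests (rate quoted in the thread).
	apigw := reqPerMonth / 1e6 * 3.50
	fmt.Printf("API Gateway: $%.2f/month\n", apigw) // $787.50, matching the "$750+data" estimate
}
```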

Is it cheaper than a VPS? Hell no. Serverless never is. But is it worth it?
Depends on your business.

~~~
nostrebored
Right, there are a few use cases for Lambda that make lots of sense, and some
that don't. If you're not extracting any operational benefits or cost savings
(think of a request that needs to run 5 minutes per hour) from the managed
portion of Lambda, then it's probably not for you.

The ALB point is very strong. APIGW can add lots of value with
request/response manipulation while sparing you the headaches of managing your
own VPS, but you need to make sure you don't just need a bare-bones path ->
lambda mapping, which is where the ALB can shine.

------
jacquesm
Recent encounter: 70+ microservices for a minor e-commerce application. Total
madness. While I'm all for approaching things in a modular way, if you really
want to mimic Erlang/BEAM/OTP, just switch platforms rather than reinvent the
wheel. In Erlang it would make perfect sense to have a small component be a
service all by itself, with a supervision tree to ensure that all components
are up and running.

~~~
keithnz
I'm always curious why the Actor concept isn't more widely used. Many
platforms / languages have some form of it.

~~~
cestith
I'm equally curious about flow-based programming. Treating things as filters
has worked well for the Unix command line. There's no reason we can't have
processes do sort, filter, merge, lookup, and such in well-defined ways and
chain them together as needed for many kinds of applications. Although
"network" and "port" mean something particular within FBP it doesn't at all
mean the processes can't be on different machines and talking via TCP/IP.

~~~
jacquesm
There is an interesting equivalence between pure functional programming,
flow-based programming, and the actor model.

There is a lot of fancy theory to underpin this equivalence; the essence is
that all of them revolve around (side-effect-free) transformation.

~~~
scruple
I don't know Erlang and I'm only a couple of months into learning Elixir in my
(limited) free time. Granted, it's not purely functional, but... I've been
reflecting on this exact equivalence, as you put it, and am happy to hear that
my inclination about the Actor model isn't unfounded.

------
artellectual
I feel like this argument of monolith vs. microservices is really a discussion
about premature optimization. There is nothing wrong with starting out with a
well-designed monolith. Limiting the responsibility of the monolith is, I
believe, the key to a maintainable piece of software. Should your business
needs grow beyond that defined responsibility, creating a new service should
be discussed.

For example, I have a service that hosts/streams videos. One service handles
all the video metadata, users, discussions, etc.; one could even think of this
as a monolith. Then the video encoding piece started interfering with the
metadata work, so I decided it might be smart to separate video encoding into
its own service, since it had different scaling requirements from the metadata
server.

In that specific case it made a lot of sense to have two services; I can
justify it with the following reasons.

- Resource isolation is important to the performance of the application.

- Having the ability to scale the encoder workers out horizontally makes
sense.

So now it makes sense I’m managing 2 services.

There should be a lot of thought and reasoning behind doing engineering work.
I think following trends is great for fashion products like jeans / shirts
etc... but not for engineering.

If you are starting a project with microservices, chances are you are
optimizing prematurely. That's just my 2 cents.

------
faizshah
Check out these two articles from Shopify on their Rails monolith:

[https://engineering.shopify.com/blogs/engineering/deconstruc...](https://engineering.shopify.com/blogs/engineering/deconstructing-monolith-designing-software-maximizes-developer-productivity)

[https://engineering.shopify.com/blogs/engineering/e-commerce...](https://engineering.shopify.com/blogs/engineering/e-commerce-at-scale-inside-shopifys-tech-stack)

Specifically relevant to the discussion is this passage:

> However, if an application reaches a certain scale or the team building it
> reaches a certain scale, it will eventually outgrow monolithic architecture.
> This occurred at Shopify in 2016 and was evident by the constantly
> increasing challenge of building and testing new features. Specifically, a
> couple of things served as tripwires for us.

> The application was extremely fragile with new code having unexpected
> repercussions. Making a seemingly innocuous change could trigger a cascade
> of unrelated test failures. For example, if the code that calculates our
> shipping rate called into the code that calculates tax rates, then making
> changes to how we calculate tax rates could affect the outcome of shipping
> rate calculations, but it might not be obvious why. This was a result of
> high coupling and a lack of boundaries, which also resulted in tests that
> were difficult to write, and very slow to run on CI.

> Developing in Shopify required a lot of context to make seemingly simple
> changes. When new Shopifolk onboarded and got to know the codebase, the
> amount of information they needed to take in before becoming effective was
> massive. For example, a new developer who joined the shipping team should
> only need to understand the implementation of the shipping business logic
> before they can start building. However, the reality was that they would
> also need to understand how orders are created, how we process payments, and
> much more since everything was so intertwined. That’s too much knowledge for
> an individual to have to hold in their head just to ship their first
> feature. Complex monolithic applications result in steep learning curves.

> All of the issues we experienced were a direct result of a lack of
> boundaries between distinct functionality in our code. It was clear that we
> needed to decrease the coupling between different domains, but the question
> was how.

I've tried a new approach at hackathons where I build a Rails monolith that
calls serverless cloud functions. So collaborators can write cloud functions
in their language of choice to implement functionality and the Rails monolith
integrates their code into the main app. I wonder how this approach would fare
for a medium sized codebase.

~~~
blt
Shopify's problem can be fixed without microservices by writing modular code.
The monolith should be structured as a set of libraries. I find it so strange
the way these microservice debates always assume that any codebase running in
a single process is necessarily spaghetti-structured. The microservice
architecture seems to mainly function as a way to impose discipline on
programmers who lack self-discipline.

~~~
al2o3cr

        The microservice architecture seems to mainly function as a way
        to impose discipline on programmers who lack self-discipline.
    

Sadly we're discovering that while that's the _goal_ , the actual result is
frequently Distributed Spaghetti.

~~~
bigmanwalter
Oh dear, it seems I've lost my poor meatball.

------
scarmig
> It feels like we’re starting to pass the peak of the hype cycle of
> microservices

I feel like any article I see on microservices bemoans how
terrible/unnecessary they are. If anything, we're in the monolith phase of the
hype cycle =)

If you're moving to microservices primarily because you want serving path
performance and reliability, you're doing it wrong. The reasons for
microservices are organizational politics (and, if you're an individual or
small company, you shouldn't have much politics), ease of builds, ease of
deployments, and CI/CD.

~~~
ori_b
There are also two purely engineering considerations: scalability and crash
isolation.

Scalability -- for when your processes no longer fit on a single node, and you
need to split into multiple services handling a subset of the load. This is
rare, given that vendors will happily sell you a server with double-digit
terabytes of RAM.

Crash isolation -- for when you have some components with very complex failure
recovery, where "just die and recover from a clean slate" is a good error
handling policy. This approach can make debugging easy, and may make sense in
a distributed system where you need to handle nodes going away at any time
_anyways_ , but it's not a decision to take lightly, especially since there
will be constant pressure to "just handle that error, and don't exit in this
case", which kills a lot of the simplicity that you gain.

Both are relatively rare.

~~~
jules
Microservices are not good for scalability. You want data parallelism, like 20
webservers that all do the same thing. Not splitting your app into 20
microservices all doing something else.

------
bcheung
I've come to the conclusion that microservices work for large organizations
where division of labor is important but for small development teams it
actually makes things worse.

What once was just a function call now becomes an API call. And now you need
to manage multiple CI/CD builds and scripts.

It adds a tremendous amount of overhead and there is less time spent
delivering core value.

Serverless architectures and app platforms seems to correct a lot of this
overhead and frustration while still providing most of the benefits of
microservices.

~~~
ngngngng
I'm on a small team working with microservices. I have different complaints
than yours. The main issue I run into with microservices is that I lose the
benefit of my Go compiler. I don't like working in dynamic languages because
of all the runtime errors I run into. With microservices, even using a
statically typed language becomes a nightmare of runtime errors.

If I change the type on a struct that I'm marshaling and unmarshaling between
services, I can break my whole pipeline if I forget to update the type in each
microservice. This feels like something that should be easy to catch with a
compiler.

~~~
solidasparagus
If your services need a shared, implicit understanding of types, you're not
respecting the microservice boundaries. Each microservice needs to offer a
contract describing what inputs it accepts and immediately reject a request
that doesn't meet that contract. Then type mismatches becomes very obvious
during development when you start getting 400s when testing against the DEV
endpoint. Don't pass around opaque structs.

------
3pt14159
These conversations always get scattered because people don't post the
experience they have that forms their view.

Me: I've only worked on HUGE systems (government stuff and per-second, multi-
currency, multi-region telephone billing) and on systems for employees with
less than 100 people.

My take: Monolith or _two_ systems _if at all possible_.

This is good: A Rails app that burps out HTML and some light JS.

This is also good: A Rails app that burps out JSON and an Ember app to sop it
up and convert it to something useable. Maybe Ember Fastboot, if performance
warrants the additional complexity.

This is hellish: Fifteen different services half of which talk to the same set
of databases. Most of which are logging inconsistently, and none of which have
baked in UUIDs into headers or anything else that could help trace problems
through the app.

This is also hellish: A giant fucking mono-repo[0] with so many lines of code
nobody can even build the thing locally anymore. You need to write your own
version control software just to wrestle with the beast. You spend literally
days to remove one inadvertently placed comma.

Sometimes you have to go to hell though. Which way depends on the problem and
the team.

[0] Kinda sorta, maybe the iOS app is in something else. Oh and there's also
the "open source" work, like protobuffs that "works" but has unreleased
patches that actually fix the problems at scale, but are "too complicated" to
work into the open source project.

------
mburst
I think for the very large tech companies that have hundreds to thousands of
engineers, microservices can make sense as a way to delegate a resource (or a
set of resources) to a small group of engineers. The issue is that a lot of
smaller companies/engineers want to do things the way these large companies do
without understanding why they're actually doing it. The onboarding cost, as
this post mentions, is huge. An engineer at a small company likely needs to
know how the entire app works, and spreading that over many services can add
to the cognitive load of engineering. The average web app just doesn't really
benefit from the resource segregation, imo.

------
ivanbakel
> I’m sure our specs were good enough that APIs are clean and service failure
> is isolated and won’t impact others.

Surely if you're building microservices, this line of thinking would be a
failure to stick to the design? If your failures aren't isolated and your APIs
aren't well-made, you're just building a monolith with request glue instead of
code glue.

I appreciate that the point is more that this methodology is difficult to
follow through on, but integration tests are a holdover. You can test at
endpoints: you _should_ be testing at endpoints! That's the benefit.

~~~
herval
> If your failures aren't isolated and your APIs aren't well-made, you're just
> building a monolith with request glue instead of code glue.

That’s pretty much every single microservice architecture I’ve ever seen, and
I’ve seen a lot of them :(

~~~
Gibbon1
That's a common thing, trying to solve the problem by moving it from one place
to another.

------
Erwin
On my Samsung TV (and via casting) I have access to 6 streaming platforms:
Netflix, HBO Nordic, Viaplay, TV 2 play, C-more (related to the French Canal+)
and a local service for streaming managed by the national public library
system (limited to loaning 3 movies per week)

Of those Netflix is famous for its complex distributed architecture and
employs 100s (if not 1000s?) of the very best engineers in the world (at
$400k+/year compensation). I haven't heard about ground-breaking architecture
from the others and don't imagine they spend 10s of millions of $ every year
on the software like Netflix does.

I'm not really seeing any difference in uptime or performance. In fact, if I
want to stream a new movie, I will use Viaplay (I can rent new stuff there for
$5), or the library streaming service (which has more interesting arthouse
stuff).

So why is Netflix in any way a software success story, if competitors can do
the same thing for 1/100th the cost?

~~~
jeremyjh
I often go for weeks where HBO Now won't work at all, or at least won't work
much of the time. I try to watch a movie; it says an error occurred and gives me a trace
ID. I contact support, they ask me to reboot my router. They have no idea what
trace IDs are for. Could I reboot it again? HBO Now still doesn't support 4k.
Netflix virtually never fails for me, is always streaming in high-quality 4k.
Whatever they are doing, it is working and they are operating a scale much
larger than those other players you mention.

~~~
mlthoughts2018
I’ve only ever found HBO Go apps to be more reliable than Netflix. Netflix
frequently takes forever for content to load, especially heavy menus, and
Netflix does a poor job of remembering my place in an episode if I turn it off
and switch devices. Additionally, Netflix aggressively blocks VPN traffic,
even if I am a US-based customer using US-only VPN locations. Never had any of
these problems with HBO apps.

------
alexk
I think that microservices are just a deployment model of the service
boundary, and there should not really be a distinction between whether
something is deployed as a microservice or a monolith, because the application
should support both, for the scenarios where each makes sense.

Consider the following API:

    
    
      UsersService:
       CreateUser
       GetUser
    
      AppCatalog:
       GetApp
       CreateApp
    

What if AppCatalog and UsersService implement both a local version of the
interface and a gRPC one? Then the distinction between a microservice and a
monolith goes away; it becomes a matter of whether they are deployed in a
single Linux process or across process/server boundaries.
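In Go this technique falls out of ordinary interfaces. A minimal sketch (the types and method signatures are hypothetical, not Teleport's actual API):

```go
package main

import "fmt"

// UsersService is the service boundary; deployment is an implementation detail.
type UsersService interface {
	CreateUser(name string) (id string, err error)
	GetUser(id string) (name string, err error)
}

// localUsers satisfies the contract in-process (the "monolith" deployment).
type localUsers struct{ users map[string]string }

func (l *localUsers) CreateUser(name string) (string, error) {
	id := fmt.Sprintf("u-%d", len(l.users)+1)
	l.users[id] = name
	return id, nil
}

func (l *localUsers) GetUser(id string) (string, error) {
	name, ok := l.users[id]
	if !ok {
		return "", fmt.Errorf("no such user %s", id)
	}
	return name, nil
}

// A grpcUsers type implementing the same interface over the network would
// slot in here; callers can't tell the difference, which is the point.

func main() {
	var svc UsersService = &localUsers{users: map[string]string{}}
	id, _ := svc.CreateUser("alice")
	name, _ := svc.GetUser(id)
	fmt.Println(id, name)
}
```

Running the same conformance suite against both implementations, as described below, is what keeps the two deployments interchangeable.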

I have implemented this technique in teleport:

[https://github.com/gravitational/teleport/tree/master/lib/se...](https://github.com/gravitational/teleport/tree/master/lib/services)

The integration test suite is run against the RPC version and the local
version at the same time, to make sure the contract remains the same:

[https://github.com/gravitational/teleport/blob/master/lib/se...](https://github.com/gravitational/teleport/blob/master/lib/services/suite/suite.go)

A single teleport binary can be deployed on one server with all microservices,
or in multi-cluster scenarios, where the binary is simply instantiated with
different roles:

    
    
      auth_service:
        enabled: yes
      node_service:
        enabled: no
    
    

Is Teleport a monolith? Yes! Is it a micro-service app? Yes! I'm so happy that
we don't have to think about this split any more.

~~~
qaq
The question is transaction boundaries: try unrolling a change that had to
touch the state of several services when one of the requests failed.

~~~
alexk
Right. Because we write against DynamoDB/etcd, transactions were a non-option
anyway, and we only have CompareAndSwap as a locking primitive:

[https://github.com/gravitational/teleport/blob/master/lib/ba...](https://github.com/gravitational/teleport/blob/master/lib/backend/backend.go#L46)

In addition to that Golang's context

[https://golang.org/pkg/context/](https://golang.org/pkg/context/)

is used to broadcast the failure of a distributed operation and release the
resources associated with it.

~~~
Thaxll
DynamoDB has transactions now.

[https://aws.amazon.com/blogs/aws/new-amazon-dynamodb-
transac...](https://aws.amazon.com/blogs/aws/new-amazon-dynamodb-
transactions/)

~~~
alexk
It's probably closer to multi-writes though, than transactions in a classic
sense, but good improvement nevertheless.

~~~
Thaxll
It's a real transaction like MySQL or PG.

[https://docs.aws.amazon.com/amazondynamodb/latest/developerg...](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/transactions.html)

------
madrox
I wish we could get past microservices as a buzzword. Defining a system
architecture by its size is relatively meaningless.

Ultimately there are principles at play behind whether a service should have
separate infrastructure than another service. If those principles aren't being
critically applied then any decision will be a rough one to live with.

------
jrockway
I think this section in the article sums up where most people's problems lie:

> So long for understanding our systems

You can't "do" microservices by just having some servers that talk to each
other. You have to rebuild the tools that come naturally from monoliths. Can
you get a complete set of logs from one logical request? Can you account for
the time spent inside each service (and its dependants)? Can you run all your
tests at every commit? Can you run a local copy of the production system? With
monoliths, those come naturally and for free. log.Printf() prints to the same
stderr for every action in the request. You can account for all the time spent
inside each service because you only have one service. All your tests run at
every commit because you only have one application. You can run locally
because it's one program that you just run (and presumably your server is just
a container like "FROM python; RUN myapp.py").

When you carelessly switch to microservices, you throw that all away. You
can't skip the steps of bringing back infrastructure that you used to have for
free. Your logs have to go through something like ELK. You need distributed
tracing with Zipkin or Jaeger. You need an intelligent build system like
Bazel. You will probably have to write some code to make local development
enjoyable. And, new concerns (load balancing, configuration management,
service discovery) come up, and you can't just ignore them.

Having said that, I don't think you can ever really get away from needing the
tools that facilitate microservices. Even the simplest application from the
"LAMP" days was split among multiple components often running on different
machines. That hasn't changed. And it's likely that you talk to many services
over the course of any given request -- you just didn't write them.
"Microservices" is just where you write some of those services instead of
downloading them off the Internet or paying a subscription fee for them.

------
jcoffland
This topic reminds me of Conway's law:

Organizations which design systems ... are constrained to produce designs
which are copies of the communication structures of these organizations.

— M. Conway

Microservices probably make sense for large companies which are essentially a
lot of small actors who build up a big system. Medium and small organizations
should probably stay away.

Or another way to think about it, choose a microservices architecture if you
want to employ a lot of devs.

------
iambvk
To me personally, it is not monolith vs. microservice that bothers me, but
stateful vs. stateless services.

If a service can't assume local state, that creates unnecessary design
overhead. For example, you cannot achieve exactly-once semantics between two
services without local state. If you replace local state with message queues,
you have just turned a 1-network-1-disk op into a 5-network-3-disk op and
introduced loads of other problems.
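The local-state point can be illustrated with the standard dedup trick: at-least-once delivery plus a locally stored seen-set gives effectively-once processing. A minimal Go sketch (types and names hypothetical):

```go
package main

import "fmt"

// processor uses local state (a seen-set) to make a handler idempotent.
type processor struct {
	seen  map[string]bool
	total int
}

// handle applies a payment exactly once per message ID, even if the
// upstream service retries the same message after a timeout.
func (p *processor) handle(msgID string, amount int) {
	if p.seen[msgID] {
		return // duplicate delivery: drop it
	}
	p.seen[msgID] = true
	p.total += amount
}

func main() {
	p := &processor{seen: map[string]bool{}}
	p.handle("m1", 100)
	p.handle("m1", 100) // retry of the same message
	p.handle("m2", 50)
	fmt.Println(p.total) // 150, not 250
}
```

Push that seen-set out of the process and into a queue or shared store, and you have exactly the extra network and disk hops the comment above is counting.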

~~~
cortesoft
If you are relying on local state, you can never scale to more than one
machine.

~~~
iambvk
How do you think Google does?

~~~
jimbokun
By not relying on local state?

~~~
NicoJuicy
Azure has the affinity cookie, which redirects the user to the same instance
if it's a webapp.

------
metapsj
I think there's a middle ground: a more domain-driven-design-centric view of
the world. Each domain is a monolithic-style application with services that
run in their own processes and communicate via some type of messaging
infrastructure, e.g. Postgres, Redis, ZeroMQ. The critical aspect of this
approach is well-defined message schemas and versioning. The services can be
spun up with a Procfile or built into a container. As you move towards
container based infrastructure, other services like instrumentation,
monitoring, and aggregation of logs are required.
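The "well-defined message schemas and versioning" point might look like this in practice (a hypothetical sketch; the field names and upgrade rule are made up for illustration):

```python
import json

# Sketch: every message carries an explicit schema version so consumers
# can keep accepting old producers during a rolling upgrade.

def make_event(version, event_type, body):
    return json.dumps({"schema": version, "type": event_type, "body": body})

def read_event(raw):
    msg = json.loads(raw)
    if msg["schema"] == 1:
        # v1 used a flat "name" field; upgrade it to the v2 shape so the
        # rest of the consumer only ever sees the current schema.
        msg["body"] = {"full_name": msg["body"]["name"]}
        msg["schema"] = 2
    return msg

event = make_event(1, "user.created", {"name": "Ada"})
upgraded = read_event(event)
# upgraded["body"] == {"full_name": "Ada"}
```

The same idea applies regardless of transport (postgres, redis, zeromq): the version field, not the broker, is what lets producers and consumers deploy independently.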

------
gambler
It seems like Erlang strikes the perfect balance between what people want from
both worlds. Scalability and fault-tolerance, but also coherence and
established ways of doing things.

~~~
jacquesm
It does, but it isn't quite as cool (or as good for your job security) to roll
your own, preferably from the ground up without any libraries or other battle
tested code.

------
ascendantlogic
The first part of the hype cycle is "I have a hammer and now everything is a
nail". The second part of the hype cycle is "I need to hammer some nails but
I'm tired of hearing about how great hammers are".

~~~
jrootabega
When all you have is a hammer you spend a lot of time on hacker news reading
about everybody else's hammers

------
beat
Which kneecap do you want to get shot in? I ask this question a lot.

Microservices are trading one sort of complexity (the ball of mud) for another
(configuration). I've found that the win for microservices is largely about
developer efficiency, not code performance or whatever. Keep the developers
from constantly tripping over each other in large systems.

------
trixie_
Too late. Every new person we hire has the best idea to break our system into
tons of micro-services. It'll pretty much happen at this point. Can't fight
the mob.

------
stephen
Everyone has an opinion; mine is around lines of code.

Do you (as in your entire company/maybe eng department) have less than 100k
LOC? If yes, you should stay in a monolith (except for potentially breaking
out very specific performance/storage use cases).

Do you have more than 100k LOC? You should start breaking things up so that a)
teams can own their destiny and b) you can have a technology evolution story
that is not "now we have a single 1 million LOC codebase and we can never
rewrite it".

Evolving ~10-20 different ~20-50k LOC codebases is doable because of the
enforced wire-call API boundaries; evolving 500k-2M LOC monoliths is not,
unless maybe you're Google/Facebook and have their tooling and workforce.

Granted, 20-50k LOC per codebase is probably not "micro".

------
mark_l_watson
I am going to date myself as 'an old guy' here: I used to love building
monolithic systems around Apache Tomcat. I would use background threads
registered to a web app to perform background periodic processing and write
the web interface using JSP (supports a fast edit/test loop). I would build an
uber-JAR file that contained everything including a thin main method to start
Tomcat with my app. Totally self contained (except for requiring a JDK or JRE
install), easy on memory requirements, and very efficient. A bonus: if an app
requires static files, they can be added to a JAR file and opened as resource
streams so `everything` really is in one JAR file.

Contrast this to J2EE that I would only use reluctantly if a customer wanted
to go that way.

------
groestl
We've used a monolithic microservice architecture before and were happy enough
with it. The application was basically structured as microservices, but
developed in a single project (monorepo and all) and the build produced a
single build artifact. At deployment time, configuration decided what set of
services the monolith would boot and expose.
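That deployment-time selection could be sketched as follows (service names and the environment variable are illustrative, not from the comment):

```python
import os

# Sketch: one build artifact contains every service; a config value
# decides which subset this particular deployment boots and exposes.

SERVICES = {
    "billing": lambda: "billing listening",
    "search":  lambda: "search listening",
    "users":   lambda: "users listening",
}

def boot(enabled_csv):
    """Boot only the services named in a comma-separated config value."""
    enabled = [s.strip() for s in enabled_csv.split(",") if s.strip()]
    return {name: SERVICES[name]() for name in enabled}

# A small deployment boots everything in one process (monolith mode);
# a large one runs one service per container:
booted = boot(os.environ.get("ENABLED_SERVICES", "billing,users"))
```

The appeal is that ops complexity scales with the deployment, not the codebase: the same artifact runs as one process or as many.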

Probably not for everyone (i.e. polyglot is hardly possible and it takes a lot
of discipline to avoid a hairy ball of interdependencies), but it scales in
ops complexity from very small setups to large ones, when needed.

~~~
int_19h
This sounds a lot like what traditional Unix apps would do with fork().

~~~
groestl
Yes, in the sense of busybox I would say.

------
acd
A few things that are harder with microservices:

* A known good consistent state. How do you freeze and take a snapshot of a
microservice in a distributed system?

* Caching. With a monolith you could be accessing the L1-L3 CPU caches on the
local node, which is very fast: local cache access takes 0.5-7 ns vs ~500,000
ns for a network round trip
[https://gist.github.com/jboner/2841832](https://gist.github.com/jboner/2841832)

* Tracing latency. In a monolith you can use performance tracing tools on a
local process and get a good overview. With microservices you need distributed
tracing tools.

* A more complex architecture with more moving parts, which makes errors
harder to diagnose.

* Memory efficiency, as language runtimes are loaded several times for the
different microservices.

Good things about microservices:

* They allow for distributed teams (backend, frontend) with a common
interface (JSON calls) for communication between the services.

* You can replace one microservice with another.

* They may be a good fit for startups that need to rapidly prototype. That
something is good for fast-moving startups does not mean it is good for
traditional enterprises.

We are likely beyond peak hype on the hype cycle for micro services.
[https://en.wikipedia.org/wiki/Hype_cycle](https://en.wikipedia.org/wiki/Hype_cycle)

------
NicoJuicy
I've seen a lot of comments here about microservices.

At work we are transforming also, so I'm in the process of setting up a
personal environment for it.

I'm also joining a Hackerspace and pitching for it next week ( hands-on
learning).

About the architecture, not much made "sense" in practice until I encountered
Akka, which uses the Actor model for creating microservices.

It seems like a much better approach than everything I learned elsewhere.

Does anyone already have experience with it? (PS: Akka.NET exists also)

~~~
quasar_ken
I use elixir, same thing. The erlang VM is very powerful and makes separation
of concerns easy. Splitting an app apart is hard because you get boundaries
wrong, but there is no way to scale without adding more complexity somewhere.

------
externalreality
Perhaps it's just that, as software developers, we eschew any form of design
that would lead to a maintainable monolithic system (the No Big Upfront Design
movement may have caused us to throw the baby out with the bath water). Maybe
we just don't yet have the tools and theory to put together a complex system
made up of many individual agents in any easy-to-do way (e.g. microkernels vs
monolithic kernels).

Look, we live in an era where the fastest time to market is always going to be
the way to go. Microservices are nice, but they slow development down a great
deal.

What we need is an easier, less subjective way to build software. I think
DataFlow programming will become more popular since it is easy, scales well,
and applicable to more domains than many would think.

A monolithic dataflow application has many of the advantages of micro-services
and monoliths alike.

I also think the industry should probably start to shy away from OOP
(especially since industry totally dumped OOD). If you go on GitHub and find a
random C program, then do the same for a random C++ program, I would bet you
can wrap your head around the C far before you can even begin to understand
the C++. How people can revolt against microservices and yet not question the
same phenomenon with respect to basic SP vs OOP is baffling to me.

I think microservice adoption is a heavy-handed approach to modularization. I
very much like Jackson Structured Programming, dataflow programming, etc.
Dataflow is actually applicable to many more domains than some think, is about
as understandable, and scales about as well as, if not better than,
microservices.

------
mattbillenstein
More people should really do in-process services before doing microservices --
the monolith and repo are your unit of deployment and running the thing, but
internally, the services are your way of organizing the code and separating
responsibilities.

An RPC call just becomes a function call - later you can split a logical
service into an actual external service should the need arise. It also makes
identifying which services talk to one another as easy as using git grep...
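The in-process pattern described above could be sketched as follows (the service names are made up): each logical service hides behind a narrow interface, so a plain function call today can become a network call later without touching call sites.

```python
# Sketch: in-process "services" behind explicit interfaces. Today the
# call is a plain function call; later, a client that wraps an HTTP/RPC
# call with the same signature can replace InventoryService unchanged.

class InventoryService:
    """Logical service boundary; owns its own state."""
    def __init__(self):
        self._stock = {"widget": 3}

    def reserve(self, sku, qty):
        if self._stock.get(sku, 0) < qty:
            return False
        self._stock[sku] -= qty
        return True

class OrderService:
    """Depends only on the inventory interface, never its internals."""
    def __init__(self, inventory):
        self._inventory = inventory

    def place_order(self, sku, qty):
        # This line is the seam that would become a network call if
        # inventory is ever split into an external service.
        return "confirmed" if self._inventory.reserve(sku, qty) else "rejected"

orders = OrderService(InventoryService())
# orders.place_order("widget", 2) -> "confirmed"
# orders.place_order("widget", 2) -> "rejected" (only 1 widget left)
```

And as the comment notes, finding cross-service callers is then just `git grep InventoryService`.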

------
reggieband
> Most of our conversation focuses on scaling the database.

I think one of the emerging principles behind modern micro-service design is
to break out your data model into separate services that hide the database.
You can then publish data changes to an event stream. This can help avoid
requests coming back to a database. I think this is a better approach compared
to heavy caching (e.g. redis / memcache).
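A minimal sketch of that principle (an in-memory list standing in for a real event stream such as Kafka; all names are illustrative):

```python
# Sketch: the user service is the only code that touches its datastore;
# downstream services consume change events instead of querying the DB.

class UserService:
    def __init__(self, stream):
        self._db = {}          # private datastore, hidden behind the service
        self._stream = stream  # append-only event log

    def update_email(self, user_id, email):
        self._db[user_id] = email
        # Publish the change so read models can stay current without
        # ever sending a request back to our database.
        self._stream.append({"type": "email_changed",
                             "user": user_id, "email": email})

stream = []
svc = UserService(stream)
svc.update_email("u1", "ada@example.com")

# A downstream service builds its own read model from the stream:
read_model = {e["user"]: e["email"]
              for e in stream if e["type"] == "email_changed"}
```

Each consumer maintains its own materialized view, which is what takes the read load off the central database.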

I definitely agree that micro-services aren't a silver bullet that solve every
engineering problem that exists but the hyperbole of 150 micro-services is a
straw man argument.

My main annoyances with the Kubernetes/Docker systems I have encountered are
the stability of the cluster and visibility into the health of pods. Both of
these issues were the result of my org deciding to build our own Kubernetes
setup from scratch, and this has turned out to be a significant task.

If I was starting my own company and wanted to develop using micro-services I
would probably use one of the existing off-the-shelf container cloud service
providers (e.g. Amazon/Google/Microsoft). I think that is a better approach
than "build a monolith then lift-and-shift into micro-services later".

------
sheeshkebab
The author is doing it wrong - they don’t need to run a local k8s cluster with
150 services - that is the monolith way, and they should have stayed with a
monolith if they want to do that.

Microservices require quite a bit of dev setup to get right, but often it
comes down to being able to run a service locally against a dev environment
that has all those 150 other microservices already running.

Queues are set up so they can be routed to your local workstation, the local
UI should be able to proxy to the UI running in dev (so that you don’t run the
entire amazon.com or such locally), deployments to dev have to be all
automated and largely lights-out, and so on... it takes a bit of time to get
these dev things right, but it doesn’t require running the entire environment
locally just to write a few lines of code.

Debugging and logging/tracing are an issue - but these days there are some
pretty good solutions to that too - Splunk works quite well, and saves a lot
of time tracking issues down.

~~~
kevindqc
For tracing, I tried Jaeger recently and it looks promising!
[https://www.jaegertracing.io/](https://www.jaegertracing.io/)

------
booleandilemma
I think software engineering is inherently cyclical.

Microservices were originated by developers who were fed up with maintaining
monoliths, and in the future the next generation of developers who grow up
maintaining microservices will become fed up with them and move towards
something more monolithic (they’ll probably have another term for it by then).

------
40acres
I'll be honest, I don't understand the difference between what defines a
monolith vs. a microservice. My 'organization' is about 15 developers, and we
all contribute to the same repo.

Visually the software we provide can be conceptually broken apart into three
major sections, and share the same utility code (stuff like command line
parsing, networking, environment stuff, data structures).

Certain sections are very deep technically, others are lightweight modules
that serve as APIs to more complex code. Every 'service' can be imported by
another 'service' because it's all just a Python module. Also, a lot of our
'services' are user facing, but perform a specialized task in an "assembly
line" way. A user may run process A, which is a pre-requisite to process B,
but may pass off the execution of process B to a co-worker.

Is this a microservice or a monolith?

~~~
jacquesm
Microservices are vertically integrated, they have their own endpoints,
storage and logic and do not connect horizontally to other microservices.

A monolith does not have any such restrictions, data structures are shared and
a hit on one endpoint can easily end up calling functions all over the
codebase.

------
agateau
I don't have a strong opinion on monoliths vs microservices, as long as you
don't go overboard with splitting things leftpad-style, but I believe
splitting VCS repositories results in a huge waste of time when making cross-
microservice API changes.

On the other hand, the king of the hill of VCS these days, git on GitHub, does
not make it easy to have this kind of setup:

\- it is not possible (as far as I know) to check out a subdirectory of a git
repository hosted on GitHub, which is annoying for deployment

\- it becomes difficult to only follow the PRs your team is interested in,
since you can't tell GitHub to only notify you of changes in the subdirs you
are interested in.

What are your experiences on this? when you split a monolith into
microservices, do you also split the VCS repository into as many repositories?

------
jaequery
This is a never ending cycle

~~~
davidw
I feel we're about due for another round of

"It makes programming so easy that anyone could do it because it's basically
like writing English!"

~~~
abakker
That's just Robotic Process Automation. If you haven't seen it, google it :)

------
revskill
To me, the hardest part of software engineering is the domain understanding,
not the engineering part.

------
amluto
I work on a project that is somewhere in the middle. We have one repo that
builds some microservices. We deploy them like a monolith, though. We have
absolutely no compatibility between microservices built from different
versions of the repo, and we have some nice tooling to debug the
communication.

And we have a little script that fires up a testable instance of the whole
shebang, from scratch, and can even tear everything down afterwards. And,
through the magic of config files and AF_UNIX, you can run more than one copy
of this script from the same source tree _at the same time_!

(This means we can use protobuf without worrying about proper
backwards/forwards compat. It’s delightful.)

~~~
JohnBooty
I worked at a company where we did something similar to that once. It was a
nice compromise.

It was a Rails monolith; one of the larger ones in the world to the best of
our knowledge. We (long story greatly shortened) split it up into about ten
separate Rails applications. Each had their own test suite, dependencies, etc.

However, they lived in a common monorepo and were deployed as a single
monolith.

This retained some of the downsides of a Rails monolith -- for example each
instance of the app was fat and consumed a lot of memory. However, the upside
was that the pseudo-monolith had fairly clear internal boundaries and multiple
dev teams could more easily work in parallel without stepping on each other's
toes.

------
vbsteven
Same sentiment here. Most clients I work for are small companies with 0 to 5
developers and in those cases I prefer to start out with a monolith so there
is only one codebase and repo to coordinate and everyone is aware how the
whole thing works.

One thing I enforce however is to have a clean separation of layers and
concepts within that monolith (modules and package names) so that if the team
grows and the need arises to break up into separate chunks most of the work is
already done and the boundaries are already defined.

I try to stick with one repo for as long as possible. This makes things much
easier for new developers to onboard and to coordinate or rollback changes.

------
gigatexal
Start monolith. Prove product. Refactor into microservices as necessary.

------
pantulis
There were no silver bullets, there aren't and there won't be. IMHO, I'd bet
you would never hear a construction contractor say "give me back my hammer".
The value remains in the choice of the tools and methodology in order to solve
a problem.

Of course the author's point of view is totally valid, and so the
microservices trend is also valid, and so are solutions in-between. One size
won't fit everyone and as with anything going blindly for any solution can
cause trouble.

~~~
al2o3cr

        IMHO, I'd bet you would never hear a construction contractor say "give me back my hammer".
    

I'd bet they'd say it if half the construction industry had decided that using
wood was "not webscale" and switched to using carbon fiber for everything,
even where it was inappropriate and made things difficult.

------
t0astbread
Every time I read a pro-monolith article it's just "oh you don't need a
microservice arch, monoliths are simpler" and every time I read a microservice
article it's "microservices are more scalable" and both claims sound valid to
me.

Yet I never see anyone talking about how we could combine the two to get the
best of both worlds. It's always just microservices vs monoliths. (Similar
things are happening in the frontend community with JS vs. no-JS debates.)

~~~
mlthoughts2018
I cannot comprehend how someone could believe monoliths are simpler. It sounds
like someone is drastically confused about the difference in kind that exists
between the inherent coupling of monolith / monorepo systems and the utterly
superficial overhead of configuration and individual tooling of microservices
/ polyrepo.

Having worked on many examples of both Fortune 500 monoliths and start-up
scale monoliths, I feel confident saying monoliths just fail, hands down, at
all these scales.

~~~
pjmlp
I have worked in monoliths, implemented across several development sites, with
300 devs on average.

Monoliths only fail when architects don't have a clue about modular
development and writing libraries.

Same architects will just design distributed spaghetti code, with increased
complexity and maintenance costs.

~~~
mlthoughts2018
Even good architects with good ideas about modularity will fail writing
monoliths, because that whole approach to software is intrinsically
antithetical to decoupling and modularity. It’s like asking a professional
soccer player to play soccer on the bottom of a full swimming pool. Doesn’t
matter how good they are because the ambient circumstances render the task
untenable. It’s the same for good engineers asked to work in monolith /
monorepo circumstances. Through outrageous overhead costs in terms of tooling
and inflexibility, the best you can hope for is stabilizing a monster of
coupled, indecipherable complexity, like in the case of Google’s codebase, and
even that minimal level of organization is only accessible by throwing huge
sums of money and tens of thousands of highly expensive engineers at it.

~~~
pjmlp
It is relatively easy to have teams writing modular code.

They just have to learn how to actually use and create libraries on their
language of choice.

Each microservice is a plain dll/so/lib/jar/... maintained by a separate team.

No access to code from other teams, other than the produced library.

It isn't that hard to achieve.

~~~
mlthoughts2018
Your comment makes it clear to me that you don’t understand microservices. The
challenge is not in the organization of simple compilation or code units that
produce libraries, not at all.

The challenge is that in reality you will always need distinct build tooling,
distinct CI logic, distinct deployment tooling, distinct runtime environments
& resources, etc., for almost all distinct services, as well as super easy
support to add new services that rely on previously never used resources /
languages / runtimes / whatever. This need happens whether you choose a
monolith approach or microservice approach, but only the microservice approach
can efficiently cope with it.

The monorepo/monolith approach can go one of two ways, both entirely untenable
in average case scenarios: (a) extreme dictatorship mandates to enforce all
the same tooling, languages and runtime possibilities for all services, or (b)
an inordinate amount of tooling and overhead and huge support staff to
facilitate flexibility in the monorepo / monolith services.

(a) fails immediately because you can’t innovate and end up with some horrible
legacy system that can’t update to modern tooling or accommodate experimental,
isolated new services to discover how to shift to new tooling or new
capabilities. This does not happen with microservices, not even when they are
implemented poorly.

(b) only works if you’re prepared to throw huge resources and headcount at the
problem, which usually fails in most big orgs like banks, telcos, etc., and
had only succeeded in super rare outlier cases like Google in the wild.

~~~
pjmlp
I have developed projects with SUN/RPC, PVM, CORBA, DCOM/MTS, EJB, Web
Services, SOA, REST.

So I think I do have some experience regarding distributed computing.

And the best lesson is that I don't want to debug a problem in production in
such systems, full of spaghetti network calls, with possible network splits,
network outages, ...

~~~
mlthoughts2018
Your comment about debugging is much, much more applicable to monolith
services than microservices. Digging into the bowels of a monolith service to
trace the path of a service call is brutal, while even for spaghetti code
microservices you can rely on the hard boundary between services (even when
the boundaries were drawn poorly or correspond to the wrong abstractions) as a
definitive type of binary search, as well as a much more natural and
composable boundary for automatically mocking calls in tests or during
debugging when isolating in which component there is a problem.

~~~
pjmlp
With a modular monolith I need one debugger, probably something like trace
points as well.

With microservices I need one debugger instance per microservice taking part
on the request chain, or the vain hope that the developers actually remembered
to log information that actually matters.

~~~
mlthoughts2018
If I worked with you, I would give negative feedback regarding your approach
to debugging. You don’t appear to be taking steps to isolate the problem,
rather just lazily stepping through a debugger expecting it will magically
reveal when a problem state has been entered.

In the monolith case, your debugger is likely to step into very low-level
procedures defined far away in the source code, with no surrounding context to
understand why or to know if sections of code can be categorically removed
from the debugging because, as separated sub-components, they could be
logically ruled out.

Instead you’ll have to set a watch point or something, run the whole system
incredibly verbosely, trip the condition and then set a new watch point
accordingly. Essentially doing serially what you could do in log(n) time with
a moderately well-decoupled set of microservices.

You’d also have the added benefit that for sub-components you can logically
rule out, you can mock them out in your debugging and inject specific test
cases, skip slow-running processes, whatever, with the only mock system needed
being a simple mock of an http/whatever request library. One simplistic type
of mock works for all service boundaries.

To do the same in a monolith, you now have to write custom mocking components
and custom logic to apply the mocks at the right places, coming close to
doubling the amount of test / debugging tooling you need to write and maintain
to achieve the same effect you can literally get for free with microservices
(see e.g. requests-mock in Python).
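The boundary-mocking idea can be shown with stdlib tooling alone (requests-mock, mentioned above, is the real-library equivalent for `requests`; this sketch uses `unittest.mock` and made-up service names):

```python
from unittest import mock

# Sketch: because the pricing service is reachable only through one thin
# client function, patching that single function mocks out everything
# behind the service boundary during a test or debugging session.

def fetch_price(sku):
    """Thin HTTP client; the only code that crosses the service boundary."""
    raise RuntimeError("network disabled in this sketch")

def order_total(skus):
    return sum(fetch_price(s) for s in skus)

# Rule the pricing service out of an investigation by stubbing the
# boundary; no custom in-process mock framework is needed:
with mock.patch(__name__ + ".fetch_price", side_effect=[10, 5]):
    total = order_total(["a", "b"])
# total == 15
```

One generic patch of the client function covers every call through that boundary, which is the "one simplistic type of mock works for all service boundaries" claim in concrete form.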

And all this has nothing to do with whether the monolith is well-written or
spaghetti code compared to the microservice implementation.

~~~
pjmlp
List of employers on my CV speaks for my approach to debugging.

------
vasilipupkin
how can these types of discussions be held in the abstract? the number of
components or services, micro or otherwise, should depend on the specific
application needs.

~~~
jasonm23
Because patterns replace thinking in too many corners of our world

------
ChicagoDave
I’ve noticed a large difference in opinion from Eurocentric architecture to
America-centric. The U.S. seems to favor ivory tower, RDBMS centric systems
and Europe is headlong into domain-driven design, event storming, serverless,
and event driven architectures.

Monolithic design is fine for simple systems, but as complexity and scale
increase, so do the associated costs.

I’m currently using DDD, microservices, and public cloud because complex
systems are better served that way.

~~~
jrochkind1
Hmmmmm, what makes you say "domain-driven design, event storming, serverless,
and event driven architectures" is less "ivory tower"?

"ivory tower" to me means academic, theoretical, "interesting", "pure", vs on
the other end of pragmatic, practical, get-it-done, whatever-works, maybe
messy. (either end of the spectrum has plusses and minuses).

"DDD, event storming, event driven architectures" don't sound... _not_ "ivory
tower" to me. :) Then again, I am a U.S. developer!

~~~
NicoJuicy
It's just basic architecture, no? ( From Europe...)

~~~
jrochkind1
I think many think an rdbms-centric design is just "basic architecture", and
all that "event sourced" stuff is over-engineered buzzword complexity.

It might very well be _useful_, it may be something many more people oughta be
doing if only they knew how valuable it was. Could be! But it certainly does
not seem basic or simple to me. It seems, well, "ivory tower". And something
with a learning curve. Not "basic" at all. (And certainly neither do
microservices).

Do y'all in Europe learn "domain-driven design, event storming, serverless,
and event driven architectures" in school or something? (I don't even totally
know what all those things _mean_ , enough to explain it to someone else).

~~~
NicoJuicy
I learned it after hours (not at school), and almost everywhere best practices
are applied.

Some SMBs don't know anything, but the developers take "pride" in their work,
I think.

Ugly source-codes are everywhere though.

~~~
ChicagoDave
And I picked it up in Accenture’s Technology Group, starting with the success
of this project:

[https://www.accenture.com/us-en/success-performance-
manageme...](https://www.accenture.com/us-en/success-performance-management-
achievement)

------
dzonga
Just watched the YC Amazon CTO talk on YouTube. It seems Amazon has a service-
oriented architecture whereby each team runs their own 'service' or monolith.
I don't work at Amazon, so someone could probably correct me. Other teams
could adopt that approach instead of super-fine-grained microservices /
containers that need a dozen supporting things. Some of the microservices
could probably be broken down into functions that run on Lambda.

------
jaequery
the current trend seems to be pointing towards a future where all the backends
are replaced with hosted API services (AWS Lambda, Google Functions, Netlify
CMS, and other emerging headless CMSs).

i think that is all we really need: some kind of API-first platform that lets
you run any server-side code, coupled with a nice abstracted database layer
and an admin interface to go along with it.

no one has it right yet though. but i think we'll be there soon.

------
sascha_sl
The main thing this article demonstrates is that you shouldn't go into
microservices without knowing the lessons learned at places that have been
doing it for years. Or without knowing why you want microservices.

For us it was a subset of "Production-Ready Microservices" by Susan Fowler.
(It was so comprehensive we didn't need all of the things the book suggests
you implement).

------
kaidax
There are ways to keep the benefits of microservices such as isolation, while
avoiding distributed computing, for example, roles:

Slides -
[https://github.com/7mind/slides/raw/master/02-roles/target/r...](https://github.com/7mind/slides/raw/master/02-roles/target/roles.pdf)

------
zabil
Breaking a monolith application into micro-services looks very enticing. Teams
initially benefit from the rewrite and refactoring process. But in the long
run it can run into the same issues as a monolith application, and maybe more.

Issues like frequent releases, downtime, and breaking changes can be solved by
writing tests and testable code, refactoring, and keeping the code clean.

------
alexnewman
I've found monoliths have nearly none of the advantages that people claim and
nearly all of the downsides. I've noticed that people who complain about
monoliths are often actually complaining about the tech used to break up the
monolith.

Such as:

\- docker

\- packages

\- serverless

I honestly think the problem is devs not taking the time to become comfortable
with tools

------
santoshalper
Microservices are a fad, and a poorly named one at that. SOLID principles and
loose coupling are a foundation for long-term design.

~~~
asaph
Poorly named? I happen to think microservices succinctly describes what they
are: small services each focused on a single task or area, and assembled
together to form a whole system.

~~~
mikkom
I think that's the point the OP is trying to make - in the real world, "micro"
services are in many cases not small.

------
simonhamp
There is a middle ground: the Modular Monolith - package-/domain-driven
monolithic apps that can be split if the need arises.

I wrote about one approach to doing this in the world of PHP using Laravel
(Lumen)

[https://link.medium.com/VA2Vq6zV2U](https://link.medium.com/VA2Vq6zV2U)

------
jwr
I am very happy with my monolith. I've been watching the K8s craze with
amusement.

I will be splitting off pieces of my monolith soon, but docker-compose is a
very reasonable compromise for running stuff, and the pieces I'm splitting off
are for aggregation and background computation, so not really micro-services
at all.

~~~
mooreds
I worked for a number of years on a large webapp. It talked to a couple of
databases and used them as a bus. There were a number of other back end
processes that read and wrote to the database. Not sexy, but solid.

------
shapiro92
the problem is not the division of a monolith into microservices but the over-
engineering of those microservices.

No, you don't need Docker. My recent .NET Core project had 3 projects (FE -
API - Dashboard). Each one was deployed with CI to its respective server, but
deployment, QA, etc. were all done outside of a Docker env because there was
nothing the env could alter. We knew we develop on Windows 10 and deploy/QA on
Ubuntu Xenial. The CI was configured around that, and it then sent the DLLs to
the server and restarted Apache after deployment.

The only case I can see for needing Docker is if you want to include your
database inside it as well, but we opted for a cloud DB (Azure) and each
service had its own.

Once again we go to the discussion in which the problem is not the technique
(microservices) but the over-engineering of such solutions.

------
joeyrosztoczy1
I absolutely love the way the Elixir/Erlang + OTP projects (even within
frameworks like Phoenix) decouple code organization / administration from the
software runtime itself.

You can have both ^^ (and every tradeoff whichever extreme you feel more
comfortable veering towards).

------
kashif
My take on monoliths vs microservices here - [https://blog.rootconf.in/will-
the-real-micro-service-please-...](https://blog.rootconf.in/will-the-real-
micro-service-please-stand-up-16bef3ed72ec)

------
jprince
I knew it'd make a comeback!

------
nilsocket
One thing to remember is that micro-services aren't meant for small/medium-
sized companies; they are meant for large companies, where a monolithic
application can't sustain high loads.

------
chpmrc
> Setup went from intro chem to quantum mechanics

Sums up frontend web development nowadays.

------
k__
With serverless, you can have a Monolith.

Frameworks like Serverless or AWS SAM allow you to create a backend where all
functions reside in one repository but get deployed in a way that each of them
scales independently.

------
tomerbd
Unless you have > 100 devs, a well-modularized monolith, with emphasis on well
modularized (as if it were microservices inside a monolith), is usually the
best option.

------
EGreg
Is this the Cathedral vs Bazaar discussion?

Basically I am wary of having a package manager pull new stuff unless I pin
the versions to what I personally looked at.

------
igl
People create their opinion from what they read in blog posts like this rather
than their own experience. Take the right tool for the job - over.

------
Quarrelsome
I wonder if this is a pointless ideology argument that isn't about micro-
services vs monolith but actually about tooling. I work on a particularly
nasty monolith and have similar complaints to the ones this person has about
micro-services. Onboarding is painful, logging is a mess, and performance
testing specifically is a pain (even this monolith has various components and
caches). Added to that, we have recruitment issues because it's not shiny,
despite the company culture being really sweet.

------
haolez
Microservices usually require data redundancy, which goes against the
philosophy of the database that the author is affiliated with.

------
peterwwillis
Some people, when confronted with a problem, think "I know, I'll use
microservices." Now they have 501 problems.

------
weberc2
So how do you scale your monolith? Just run more instances even when your few
interesting routes are the primary bottlenecks?

~~~
mooreds
Exactly. If the option is more servers on one hand, and servers plus k8s plus
specialized skills plus additional deployment and development complexity on
the other, I know which one I'd choose.

~~~
iends
The one that gave you more job security?

~~~
mooreds
Ah, a cynic. :)

Depends on my investment in the company and what the company rewards me for,
to be honest.

Most times I like to work in companies where I'd be rewarded for choosing the
best solution for the company, regardless of job security. For instance, I've
actively fought against language creep at a company because it would end up
siloing developers.

But I'm not naive. If I worked at a company that rewarded me for complicated
architectures, I'd deliver complicated architectures.

------
LifeLiverTransp
Have you read my blog: Grey Goo as an Architecture - Monolith and Micro-Service
Swarm as artificial antagonism.

------
jmrobertson
On-boarding new devs from the bootcamps and transitioning them to AWS + Lambda
is like starting from square one in terms of what they were expecting to work
on. Very much a challenge, especially getting them to think about how to
leverage good CI/CD for the cloud. There are still a lot of knowledge gaps out
there.

------
inopinatus
It's a corollary of Conway's Law that your services should be as granular as
your product teams.

------
oneplane
No. At the same time: the 6 R's. Do what makes sense and measures right, not
what is hip at the moment.

------
purrcat259
Scaling the database link unfortunately 404s. Would love to read the
accompanying blog post.

~~~
craigkerstiens
Thanks for the catch, should be updated now.

~~~
purrcat259
Works now, thanks!

------
eecks
Pretty bad article. Spelling mistakes, no references and broad statements like
"For many junior engineers this is a burden that really is unnecessary
complexity".

Microservices in a monorepo with a proper dev env and build pipelines is just
as "simple" as a monolith. Simple in quotes because I have seen crazy,
sprawling monoliths.

------
hansflying
My personal list of overhyped technologies:

\- TDD

\- Microservices

\- Single-Page-Applications

\- Node.JS

They tend to be used even when there are better options in specific cases.

------
HelloNurse
Microservices are resumé-oriented architecture, like using CORBA 25 years ago.

------
kuwze
You could easily write the same article about single threaded programming.

------
xissy
I doubt the author really had '150 micro-services'. A monolith that could be
separated into more than 100 micro-services is already complicated as hell,
and its engineers live with the pain of it.

~~~
hitpointdrew
cough, cough, SAP

~~~
jacquesm
In that case I could actually see some valid reasons for it.

------
discobean
Microservices as just small monoliths

~~~
externalreality
I agree. The popularity of microservices stems from messy large systems. So
why not just have messy small systems instead?

Why is it that people believe they need a microservice architecture in the
first place? None of the benefits of microservices are absent in a carefully
designed monolith.

If we are not going to give up our frenetic rapid development practices then
we just need tools that help us move fast while keeping code understandable.
Maybe we just need higher level languages where the machine can just keep
track of all the details from extremely high level specifications. Software is
too hard for humans.

------
edvald
This phrase, slightly paraphrased, was part of what triggered me to found my
startup. "I want my monolith back." It was even a slide in our first pitch
deck.

So I empathize. I do get the motivation behind microservices (or other flavors
of distributed system—I tend to use the microservice term a little loosely).
Too many people/teams working on the same deployable eventually becomes a
bottleneck for collaboration, builds and tests can take a long time for even
small changes, governance and domain-separation becomes harder, and so forth.
You'll also grow to have different SLOs and tolerances for different parts of
your system. For example, logins should almost never fail, but background
workers might slow down or fail without major fallout. Plus different services
may have completely different scale/resource requirements.

Really, the question is: When do microservices become important for you (if
ever)? When is it justifiable to do it presumptively, anticipating future
growth? We all need to make those bets as best we can.

That said, I strongly believe that tooling can lower the baseline cost of
splitting systems into microservices. That was one of our main motivations for
starting garden.io—bringing the cost, complexity and experience of developing
distributed backends to that level, and hopefully improving on it. We miss
being able to build, test and run our service with a single command. We miss
how easy it was to navigate our code in our IDE—it was all in the same graph!
Our IDE could actually reason about our whole application. You didn’t have to
read a README for every damn service to figure out how to build it, test it
and run it—hoping that the doc was up to date. You could actually run the
thing locally in most cases, and not have minikube et al. turning your laptop
into a space heater.

I don’t want to plug too much here (we’ll do the Show HN thing before long),
but we’re working on something relevant to the discussion. We want to provide
a simple, modular way to get from a bunch of git repos to a running system,
and build on that to improve the developer experience for distributed systems.
With Garden, each part of your stack describes itself, and the tooling
compiles those declarations into a stack graph, which is a validated DAG of
all your build, test, bootstrap and deployment steps.

The lack of this sort of structure is imo a huge part of the problem with
developing distributed systems. Relationships that, in monoliths, are implicit
in the code itself, instead become scattered across READMEs, bash scripts,
various disconnected tools, convoluted CI pipelines—and worse—people’s heads.
We already know the benefits of declarative infrastructure, IaC, etc. Now it’s
just a question of applying those ideas to development workflows.

With a stack graph in hand, you can really start chipping away at the cost and
frustration of developing microservices, and distributed systems in general.
Garden, for example, leverages the graph to act as a sort of incremental
compiler for your whole system. You get a single high-level interface, a
single command to build, deploy and test (in particular integration test), and
it gets easier to reason about your whole stack.

Anyway. Sorry again about the plug, but I hope you find it relevant, if only
at an abstract level. Garden itself is still a young project, and we’re just
starting to capture some of the future possibilities of it, but I figure this
is as good an opportunity as any to talk about what we’re thinking. :)

------
staticassertion
I've built my personal side project as microservices. I started with an
initial POC in Python and then I had a clear vision for what services to
build.

[https://github.com/insanitybit/grapl](https://github.com/insanitybit/grapl)

> I’d have the readme on Github, and often in an hour or maybe a few I’d be up
> and running when I started on a new project.

I can deploy all of my services with one command. It's trivial - and I can
often just deploy the small bit that I want to.

I don't use K8s or anything like that. Just AWS Lambdas and SQS based event
triggers.

One thing I found was that by defining what a "service" was upfront, I made
life a lot easier. I don't have snowflakes - everything uses the same service
abstraction, with only one or two small caveats.
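
The linked CDK code defines the real abstraction; purely to illustrate the idea (every name in this sketch is made up), a uniform "service" shape might look like:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: every service is "a queue in, a handler, a queue out",
# so no service is a snowflake.
@dataclass
class Service:
    name: str
    input_queue: str                    # SQS queue the service consumes from
    output_queue: str                   # SQS queue it publishes results to
    handler: Callable[[bytes], bytes]   # Lambda entry point

def make_service(name: str, handler: Callable[[bytes], bytes]) -> Service:
    # A uniform naming convention means every service is wired up identically.
    return Service(
        name=name,
        input_queue=f"{name}-input",
        output_queue=f"{name}-output",
        handler=handler,
    )

svc = make_service("node-identifier", lambda event: event)
```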

I don't imagine a Junior developer would have a hard time with this - I'd just
show them the service abstraction (it exists in code using AWS-CDK)[0].

> This in contrast to my standard consolidated log, and lets not forget my
> interactive terminal/debugger for when I wanted to go step by step through
> the process.

It's true, distributed logging is inherently more complex. I haven't run into
major issues with this myself. Correlation IDs go a really long way.
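
A minimal illustration of the correlation-ID idea (field names here are arbitrary): every handler reuses the caller's ID, or mints one at the edge, and stamps it on everything it logs and emits.

```python
import json
import logging
import uuid

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("svc")

def handle(event: dict) -> dict:
    # Reuse the caller's correlation ID, or mint one at the edge.
    corr_id = event.get("correlation_id") or str(uuid.uuid4())
    log.info(json.dumps({"correlation_id": corr_id, "msg": "handling event"}))
    # Propagate the ID on every message emitted downstream, so a log
    # aggregator can stitch one request's trail back together.
    return {"correlation_id": corr_id, "payload": event.get("payload")}

out = handle({"correlation_id": "abc-123", "payload": 1})
```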

Due to serverless I can't just drop into a debugger though - that's annoying
if you need to. But also, I've never needed to.

> But now to really test my service I have to bring up a complete working
> version of my application.

I have never seen this as necessary. You just mock out service dependencies
like you would a DB or anything else. I don't see this as a meaningful
regression tbh.
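
For instance, a downstream service can be stubbed the same way you'd stub a database (the client class here is illustrative, not from any real codebase):

```python
from unittest import mock

# Hypothetical client for a downstream service.
class UserServiceClient:
    def fetch_user(self, user_id: int) -> dict:
        raise NotImplementedError("real implementation calls over the network")

def greeting_for(client: UserServiceClient, user_id: int) -> str:
    user = client.fetch_user(user_id)
    return f"Hello, {user['name']}!"

# In a unit test, mock the service dependency instead of standing up
# the whole application.
fake = mock.Mock(spec=UserServiceClient)
fake.fetch_user.return_value = {"name": "Ada"}
assert greeting_for(fake, 42) == "Hello, Ada!"
fake.fetch_user.assert_called_once_with(42)
```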

> That is probably a bit too much effort so we’re just going to test each
> piece in isolation, I’m sure our specs were good enough that APIs are clean
> and service failure is isolated and won’t impact others.

Honestly, enforcing failure isolation is trivial. Avoid synchronous
communication like the plague. My services all communicate via async events -
if a service fails the events just queue up. The interface is just a protobuf
defined dataformat (which is, incidentally, one of the only pieces of shared
code across the services).
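
The failure-isolation property is easy to see in a toy version (an in-memory deque standing in for a durable queue like SQS): if the consumer is down, events just accumulate, and the producer never notices.

```python
from collections import deque

# In-memory stand-in for a durable queue such as SQS.
queue = deque()

def producer(event: dict) -> None:
    # The producer never calls the consumer directly, so a consumer
    # outage can't fail the producer - events just queue up.
    queue.append(event)

def consumer_drain() -> list:
    # When the consumer comes back, it catches up in order.
    processed = []
    while queue:
        processed.append(queue.popleft())
    return processed

# Consumer is "down" while three events arrive...
for i in range(3):
    producer({"id": i})
assert len(queue) == 3

# ...and when it recovers, nothing was lost.
assert [e["id"] for e in consumer_drain()] == [0, 1, 2]
```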

Honestly, I didn't find the road to microservices particularly bumpy. I had to
invest early on in ensuring I had deployment scripts and the ability to run
local tests. That was about it.

I'm quite glad I started with microservices. I've been able to think about
services in isolation, without ever worrying about accidental coupling or
accidentally having shared state. Failure isolation and scale isolation are
not small things that I'd be happy to throw away.

My project is very exploratory - things have evolved over time. Having
boundaries has allowed me to isolate complexity and it's been extremely easy
to rewrite small services as my requirements and vision change. I don't think
this would have been easy in a monolith at all.

I think I'm likely going to combine two of my microservices - I split up two
areas early on, only to realize later that they're not truly isolated
components. Merging microservices seems radically simpler than splitting them,
so I'm unconcerned about this - I can put it off for a _very_ long time and I
still suspect it will be easy to merge. I intend to perform a rewrite of one
of them before the merge anyways.

I've suffered quite a lot from distributed monolith setups. I'm not likely to
jump into one again if I can help it.

[0] [https://github.com/insanitybit/grapl/blob/master/grapl-
cdk/i...](https://github.com/insanitybit/grapl/blob/master/grapl-
cdk/index.ts#L65)

~~~
jcims
Grapl looks quite interesting. I'm looking for something similar for public
cloud (e.g. cloudtrail+config+?? for building graph+events). Is there a
general pattern you employ for creating the temporal relationship between
events? e.g. word executing subprocess _and then_ making a connection to some
external service. Just timestamp them or is there something else?

~~~
staticassertion
I think what you're getting at is Grapl's identification process. It's
timestamp based, primarily, at the moment, yes.

A bit of the algorithm is described here:
[https://insanitybit.github.io/2019/03/09/grapl](https://insanitybit.github.io/2019/03/09/grapl)

More specifically Grapl defines a type of identity called a Session - this is
an ID that is valid for a time, such as a PID on every major OS.

Sessions are tracked or otherwise guessed based on logs, such as process
creation or termination logs. Because Grapl assumes that logs will be dropped,
come out of order, or be extremely delayed, it makes the effort to "guess" at
identities. It's been quite accurate in my experience but the algorithm has
many areas for improvement - it's a bit naive right now.
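
As a very naive sketch of the timestamp idea (this is my reading of it, not Grapl's actual algorithm): given create/terminate logs for a PID, attribute an event to the session whose time window contains the event's timestamp.

```python
from typing import Optional

# Naive sketch of timestamp-based session attribution. A PID is only
# unique for the lifetime of a process, so pick the session whose
# [start, end) window covers the event's timestamp.
sessions = [
    {"session_id": "A", "pid": 1234, "start": 100, "end": 200},
    {"session_id": "B", "pid": 1234, "start": 300, "end": 400},
]

def attribute(pid: int, timestamp: int) -> Optional[str]:
    for s in sessions:
        if s["pid"] == pid and s["start"] <= timestamp < s["end"]:
            return s["session_id"]
    return None  # a real system would have to guess here

assert attribute(1234, 150) == "A"   # falls in session A's window
assert attribute(1234, 350) == "B"   # same PID, later session
assert attribute(1234, 250) is None  # between sessions: ambiguous
```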

Happy to answer more questions about it though.

Based on what you seem to be interested in, I'd like to recommend CloudMapper
by Scott Piper.

[https://github.com/duo-labs/cloudmapper](https://github.com/duo-
labs/cloudmapper)

~~~
jcims
The blog post is super helpful! I think the session concept is the thing I
needed. Thank you!

I tried running cloudmapper but I think I would need to replace the backend
with a graph database and scrap the UI parts. We've got hundreds of AWS
accounts and I'm having trouble just getting it to process all the resources
in one of them.

~~~
staticassertion
FWIW, Scott Piper, who builds CloudMapper, also consults.

Glad I could help.

------
golemiprague
I think there is a place for services right at the beginning if they are well-
defined, pre-existing services like an authentication or chat service. It is an
easy way to add common functionality, and you don't have to maintain the
service itself, just integrate it into the overall system. For the rest of the
more domain-specific stuff, just build a monolith and extract services out of
it when it feels right. They don't have to be "micro", though, just services.
It does require some discipline to keep modules as separated as possible so it
will be easier to extract them as a service later.

------
marcrosoft
Microservices are a business organizational tool. They literally bring nothing
to the table from a technical standpoint.

~~~
baumy
What? This comment seems ridiculous to me. They aren't a panacea and aren't
right in all circumstances, but they have plenty of technical advantages. You
can write code for different services in different languages / on different
stacks, prototype using a new language/technology/stack with a small piece of
the overall application, and develop and deploy in parallel more easily. If one
component fails, it's less likely to bring down the whole application, and you
get more freedom to scale if certain components of an application require more
resources, or different types of resources, than others....

That's off the top of my head. These all come with tradeoffs of course, but to
say they bring nothing to the table is absurd.

~~~
Udik
> You can write code for different services in different languages

But wouldn't that mean that the services must have no code whatsoever in
common? And in that case, why would they be part of a monolith in the first
place?

------
briandoll
If you want the "simple" dev experience of a monolith, but the technical
advantages (or just plain reality of your distributed systems) of services-
based architectures, Tilt is a really great solution:
[https://tilt.dev/](https://tilt.dev/)

It makes both development of services-based apps easier, and the
feedback/debugging of those services. No more "which terminal window do I poke
around in to find that error message" problem, for one.

~~~
yjftsjthsd-h
> No more "which terminal window do I poke around in to find that error
> message" problem, for one.

What? Just throw everything in syslog/journal, then stream that to an
aggregator like logstash. Now you can get all logs from one system with
journalctl, and all logs for an environment from kibana.
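
In Python, for instance, pointing a service at syslog is a few lines per process (this sketch uses the default UDP transport to localhost:514; on Linux you'd typically pass `address="/dev/log"` to hit the local daemon's Unix socket instead):

```python
import logging
import logging.handlers

log = logging.getLogger("my-service")
log.setLevel(logging.INFO)

# Send everything to the syslog daemon, which rsyslog/journald can then
# forward to an aggregator such as Logstash. The default address is
# ("localhost", 514) over UDP.
handler = logging.handlers.SysLogHandler(address=("localhost", 514))
handler.setFormatter(
    logging.Formatter("my-service: %(levelname)s %(message)s")
)
log.addHandler(handler)

log.info("user login ok")
```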

