
Modules vs Microservices - swah
https://www.oreilly.com/ideas/modules-vs-microservices
======
AtticusTheGreat
I don't have much ideology behind going with microservices vs. monolith, but
what we've done on some recent projects is organize our code into modules that
only communicate with each other through a narrow and well defined boundary
layer. If we need to split a module out into a separate service, then it isn't
nearly as much work to split it out later.

One of the practical issues we've had with microservices that need to interact
with each other in real time is ensuring a consistent state across systems.
For example, let's say I need to change the status of an object and
afterwards, call a separate service to change state there as well. What
happens if the call fails in some way? You can't just run all of this inside a
single database transaction anymore. Now you have to design your code to deal
with several potential failure points and edge cases, which adds complexity.
The other consideration is that all calls to a service should be idempotent
where possible. It makes coding on the client side a lot easier if you can
just fire off a call multiple times (in case of local or remote failure) and
not have to worry about state.
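
For illustration, an idempotency-key version of that status call might look
like this Go sketch (the endpoint, header handling, and retry policy are all
hypothetical): the client attaches a stable key so the server can deduplicate,
which makes blind retries safe.

    package statusclient

    import (
        "bytes"
        "fmt"
        "net/http"
        "time"
    )

    // setStatus changes an object's status in another service. Because every
    // attempt carries the same Idempotency-Key, replaying the call after a
    // local or remote failure cannot apply the change twice.
    func setStatus(objectID, status, idempotencyKey string) error {
        url := "https://other-service.internal/objects/" + objectID + "/status" // hypothetical endpoint
        body := fmt.Sprintf(`{"status":%q}`, status)

        var lastErr error
        for attempt := 1; attempt <= 3; attempt++ {
            req, err := http.NewRequest(http.MethodPut, url, bytes.NewBufferString(body))
            if err != nil {
                return err
            }
            req.Header.Set("Content-Type", "application/json")
            req.Header.Set("Idempotency-Key", idempotencyKey) // same key on every retry

            resp, err := http.DefaultClient.Do(req)
            if err != nil {
                lastErr = err // local/network failure: safe to retry blindly
            } else {
                code := resp.StatusCode
                resp.Body.Close()
                if code < 500 {
                    return nil // success, or a permanent client error to surface elsewhere
                }
                lastErr = fmt.Errorf("remote failure: HTTP %d", code)
            }
            time.Sleep(time.Duration(attempt) * 100 * time.Millisecond) // crude backoff
        }
        return fmt.Errorf("status update failed after retries: %w", lastErr)
    }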

Just some of my thoughts, since this stuff has been on my plate recently.

~~~
lobster_johnson
We've used microservices for around 6-7 years now. One thing we realized quite
early was that letting each microservice store its state in an independent
"silo" was a bad idea: you run into the synchronization issue you describe.

Instead, we've moved the state to a central, distributed store that everyone
talks to. This allows you to do atomic transactions. Our store also handles
fine-grained permissions, so your auth token decides what you're allowed to
read and write.

One non-obvious consequence is that some microservices can now be eliminated
entirely, because their API was previously nothing but CRUD. Or they can be
reduced to a mere policy callback -- for example, let's say the app is a
comment system that allows editing your comment, but only within 5 minutes.
ACLs cannot express this, so to accomplish this we have the store invoke a
callback to the "owner" microservice, which can then accept or reject the
change.
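
As a sketch, the owner-side callback could look something like this in Go
(route, payload, and field names are all hypothetical); the store commits the
write only if the callback answers 200:

    package main

    import (
        "encoding/json"
        "net/http"
        "time"
    )

    // proposedChange is the (hypothetical) payload the store sends to the
    // owning microservice before committing a write.
    type proposedChange struct {
        ObjectID  string    `json:"object_id"`
        CreatedAt time.Time `json:"created_at"` // when the comment was created
    }

    // policyHandler accepts edits only within five minutes of creation; the
    // store treats 200 as "accept" and anything else as "reject".
    func policyHandler(w http.ResponseWriter, r *http.Request) {
        var change proposedChange
        if err := json.NewDecoder(r.Body).Decode(&change); err != nil {
            http.Error(w, "bad payload", http.StatusBadRequest)
            return
        }
        if time.Since(change.CreatedAt) > 5*time.Minute {
            http.Error(w, "edit window expired", http.StatusForbidden)
            return
        }
        w.WriteHeader(http.StatusOK)
    }

    func main() {
        http.HandleFunc("/policy/comment-edit", policyHandler) // hypothetical route
        http.ListenAndServe(":8080", nil)
    }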

Another consequence is that by turning the data store into a first-class
service, many APIs can be expressed as data, similar to the command pattern.
For example, imagine a job system. Clients request work to be done by creating
jobs. This would previously be done by POSTing a job to something like
/api/jobs. Instead, in the new scheme a client just creates a job in the data
store. Then the job system simply watches the store for new job objects.
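
A rough sketch of the consuming side, assuming a hypothetical client library
for such a store with a Watch API:

    package main

    import (
        "context"
        "log"

        "example.com/store" // hypothetical client for the central data store
    )

    func main() {
        ctx := context.Background()
        client := store.Connect("store.internal:4000") // hypothetical address

        // Instead of serving POST /api/jobs, the job system watches the
        // store's changefeed for newly created job objects.
        feed := client.Watch(ctx, store.Query{Type: "jobapp.Job", Status: "pending"})
        for event := range feed {
            job := event.Object
            log.Printf("picked up job %s", job.ID)
            // ... perform the work, then record the outcome as data too ...
            client.Update(ctx, job.ID, store.Fields{"status": "done"})
        }
    }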

Of course, this way of doing things comes with its own challenges. For
example, how do you query the data, and how do you enforce schemas? We solved
some of these things in a rather ad hoc way that we were not entirely happy
with. For example, we didn't have joins, or a schema language.

So about a year ago we went back to the drawing board and started building our
next-generation data store, which builds in and codifies a bunch of the
patterns we have figured out while using our previous store. It has schemas
(optional/gradual typing), joins, permissions, changefeeds and lots of other
goodies. It's looking extremely promising, and already forms the foundation of
a commercial SaaS product.

This new store will be open source. Please feel free to drop me an email if
you're interested in being notified when it's generally available.

~~~
sheeshkebab
and how that central 'data store service' is different than a single 'database
service' (rdbms or nosql - CRUD) that all microservices connect to and run
there select/insert/update/delete/crud ops?

Other than api - rest vs whatever binary rpc protocol, it sounds very much
like a standard database...

~~~
lobster_johnson
The difference may seem subtle, but I'd argue that it is a whole other
paradigm. It's one of those things that you either get, or you don't, but it
might take some time to fully appreciate.

First of all, we're not an RDBMS, and don't pretend to be. I love the
relational model, but there's a long-standing impedance mismatch between it
and web apps that I won't go into here. There are clearly pros and cons. Our
data store _isn't_ intended as a replacement for classical relational OLTP
RDBMS workflows.

If you let all apps share a single RDBMS, you're inevitably going to be
tempted to put app-specific stuff in your database. This one app needs a
queue-like mechanism, this other app needs some kind of atomic counter
support, etc. You may even create completely app-specific tables. How do you
compartmentalize anything? How do you keep different versions of apps
conforming to the same strict schema? How do you incrementally upgrade your schemas
without taking down all apps? How do you create denormalized changefeeds that
encompass the data of all apps? How do you institute systemwide policies like
role-based ACLs, without writing a layer in stored-procedures and triggers
that everything goes through? Etc. There are tons of things that are difficult
to do with SQL, even with stored procedures.

I would argue that if you go down that route, you'll inevitably reinvent the
"central data store pattern", but poorly.

~~~
theptip
Fowler refers to this as the "Integration Database" pattern, and advises
against it:
[https://martinfowler.com/bliki/IntegrationDatabase.html](https://martinfowler.com/bliki/IntegrationDatabase.html)

The issue with a centralized data store is that your services are coupled
together by the schemas of the objects that they share with other services.
This means you can't refactor the persistence layer of your service without
affecting other services.

All that said, a single source of truth does do away with distributed
transactions, so I can see the appeal.

~~~
lobster_johnson
He seems to come at it from a slightly different angle, and I can see how his
scenario isn't a good idea.

It's worth pointing out that you have the same challenge in a siloed
scenario, except there the "bounded contexts" are separated by the
applications themselves, with no chance of tight coupling because there's no
way to tightly couple anything. In the silo version, apps can still point at
each other's data (e.g. reference an ID in another app); there's just no way
of guaranteeing that the data is consistent.

The coupling challenge is solved by design -- by avoiding designing yourself
into tight couplings.

For example, let's say you desire every object to have an "owner", pointing at
the user that "owns" the object. So you define a schema for User, and then
every object points to its owner User. But now all apps are tightly coupled
together.

In our apps, we typically don't intertwine schemas like that unless there's a
clear sense of cross-cutting. An "owner" field would probably point to an
object within the app's own schema: A "todoapp.Project" object can point its
"owner" field at a "todoapp.User", whereas a "musicapp.PlaylistItem" can point
to a "musicapp.User".

(Sometimes you _do_ have clear cross-cutting concerns. An example is a
scheduled job to analyze text. The job object contains the ID of the document
to analyze. The job object is of type "jobapp.Job". The "document_id" field
can point to any object in the store. The job doesn't care what the document
is -- all it cares about is that it has fields containing text that can be
analyzed. So there's no tight coupling of schemas at all, only of data.)

However... I have played with the idea of a "data interface" concept. Like a
Java or Go interface, it would be a type that expresses an abstract thing. So
for example, todoapp could define an interface "User" that says it must have a
name and an email address. Now in the schema for todoapp.TodoItem you declare
the "owner" field as type "User". But it's an interface, not a concrete type.
So now we can assign anything that "complies with" the interface. If
todoapp.User has "name" and "email", we can assign that to the owner, and if
musicapp.User also has "name" and "email" with the right types, it is also
compatible. But I can't assign, say, accountingsystem.User because it has
"firstName", "lastName" and "email", which are not compatible.

------
100k
I know microservices are successful in many organizations but one downside
I've experienced from the microservices hype is starting an application with
microservices.

It's very difficult to get the system boundaries correct while you're still
iterating on core functionality, and if you get it wrong you're in a world of
pain. Refactoring becomes very hard and performance suffers from unnecessary
network overhead. Deployment is harder. Coordination is harder. Developers
can't run the whole system locally. Testing is harder. Basically, if you don't
have a very well defined interface between components, it's going to hurt.

I would not recommend starting with a microservices architecture. Build a
modular, well-factored application and split out pieces if they need to be
scaled separately or there are other compelling benefits of a microservice.

To quote Kris Jenkins:

This is your return type: Int

This is your return type on microservices: IO (Logger (Either HttpError Int))

Microservices: Know the risks.

[https://twitter.com/krisajenkins/status/762901550696194048](https://twitter.com/krisajenkins/status/762901550696194048)

~~~
nawitus
"Developers can't run the whole system locally."

Developers should be able to run the whole system locally.

~~~
100k
I agree. In the system I'm currently working on, they can't. I think it would
be possible to implement, but no one has been able to take the time (or,
perhaps, no one sees the value in it).

(OTOH, I kind of doubt developers at Google or Facebook run the whole thing
locally, so there must be some kind of end state for this.)

~~~
nawitus
If the alternative to microservices is a monolith, and you can run the
monolith locally, then logically microservices can also be run locally. If
it's difficult to run all the microservices locally then that's just a sign of
weak tooling.

~~~
cloakandswagger
"Just a sign of weak tooling."

Why does technical overhead take a backseat whenever a microservices vs
monolith discussion comes up?

Yes, in a perfect world every org would have sufficient time and engineering
resources to implement microservices for better scalability and code quality.
In the real world, setting up and maintaining microservices has huge technical
overhead, I'd estimate double that of the equivalent monolithic architecture.

If your company isn't flush with cash and the product you're building will
never need massive scaling then it makes no sense to use microservices, at
least from a business perspective.

~~~
nawitus
"Why does technical overhead take a backseat whenever a microservices vs
monolith discussion comes up?"

I don't think it does, but that's kind of off topic.

"In the real world, setting up and maintaining microservices has huge
technical overhead, I'd estimate double that of the equivalent monolithic
architecture."

I agree.

~~~
sbov
If two different solutions don't share the same "off-topic" overhead, then
that overhead isn't off topic.

------
tannhaeuser
Beginning in 2003, and until the height of the SOA craze around 2010, the
Service Component Architecture (SCA) was seen as the holy grail of service
integration. It encompassed both local/in-process (or in-JVM) and
networked services (SOAP and REST), was polyglot in that it defined API
bindings for native, Java, and even PHP and Cobol, could access external
services, and was still quite practical. For those needing it, it also
supported authorization and transaction policies and protocols.

When commercial interest in SOA middleware products dropped sharply, further
standardization of version 1.1 slowed down, and Oracle, sitting on the SCA
board, voted down, without explanation, all the specs that had been worked
on.

To this day I still haven't understood what makes microservices different from
SOA in a technical sense. I can get that the term SOA was probably burnt at
some point, but if there's a real lesson to be learned from SOA failures, I'd
really like to know. Maybe SOA was seen as too complex because it addressed
some of the harder problems such as transaction and authorization
protocols/boundaries, BPM, etc. upfront?

~~~
jwilliams
For a while I was consulting a lot around SOA - particularly around the IBM
stack, which was one of the biggest SOA evangelists in the early days.

SOA was too complex for sure, but it wasn't just that it was complex. It's
that the complexity didn't actually deliver the return.

Part of the issue was the way SOA was sold. You buy a BPM or a Rules Engine
and suddenly you unlock all this value and you can compose things on the fly
-- Rules Engines were particularly bad in this respect. Business users were
told they'd be able to tweak rules on the fly. This was never a reality.

Then you get outright hits in terms of complexity. Those BPM systems with
two-phase commit were monsters. In fact, they were so complex that failure
became _more_ likely - when in the vast majority of cases actual failures
were rare and easily handled by a reconciliation process. So the tech never
really matched the need.

On top of that, the consultants cost a fortune. Instead of a few test
environments, you needed 7. Managing test data and rollout to end-users became
a nightmare... And even then, as the technology wasn't mature, you'd hit
hurdles very late. Dismal production performance was a common one.

It's a bit sad. A really interesting set of ideas and technologies - but
really poorly sold, and too far behind the hype to deliver. The same is true
of related technologies like CORBA.

~~~
tannhaeuser
Re: "rule engines"

I know exactly what you mean, having suffered through JRules projects, but
don't consider rule/forward-chaining languages part of a SOA stack per se (I
especially loved the idea that rule bases, unlike services, don't need
testing, because they're end-user configuration parts, and because, like SQL,
they're kindof declarative).

Reading through the answers, I still don't know how microservices are any
different from SOA ):

------
vojant
I co-founded a startup 6 months ago, and since day 1 we have used
microservices. For us the biggest benefit was that at the beginning we could
hire people who knew different programming languages (we managed to build a
team of 5 in 3-4 weeks), and they could each build small parts of the system
communicating via HTTP/RabbitMQ. The downside is that we had to have CI/CD
from day one, and it cost us some resources.

I am not saying microservices are a cure for everything, and of course there
is a place for well-maintained monoliths, but I find that even for smaller
teams microservices can be just easier than a monolith.

~~~
mylons
I don't think using micro-services from day 1 is a wise use of resources. What
if one of your engineers, who probably owns an entire service, quits? And it's
in Haskell because she felt like it?

~~~
staticassertion
This is a totally nutty scenario. Microservices are not "do whatever you
want!" -- they give you the freedom to choose your stack. It is _obviously
still a business decision_ to choose Haskell, and you've got other problems
if developers are building things in random languages they feel like using
without a larger discussion.

The issue here isn't Haskell, it's ownership and process.

~~~
mylons
Given how they described their org, it seems like a valid scenario. Extreme
examples are tools for illustrating a point.

Microservices from day one are a premature optimization. Just saying that
doesn't necessarily paint a picture of why it might be one.

~~~
cookiecaper
I agree, but I don't think microservices are properly classed as an
optimization of any sort, premature or not. Microservices arise because a
company can't communicate/manage itself internally.

This does not mean that you must have one giant 50MB executable to run your
whole company, but it probably does mean most companies shouldn't have 60
200-line microservices.

~~~
boomlinde
I think that microservices may be an actual optimization when the application
flow has several clearly separable tasks that have varying requirements and
you need to divide the load over several machines. For example, one task may
be mostly I/O heavy, another will use a lot of RAM and a third may mostly be
CPU bound. When you distribute the load over multiple servers, microservices
can make it easier to tailor each server to the needs of the services it runs.
The I/O bound workload doesn't need 100GB RAM and the CPU bound workload may
not need several gigabit interfaces.

That said, I haven't personally worked with a microservice-based architecture
where this ever became a useful optimization. Often it is exactly as you say:
a technological workaround for an organizational problem.

------
bluejekyll
I used to make this argument. I'm not so convinced anymore. Much of this
modularity can be achieved now: split your libraries in two, one for APIs and
the other for implementations. Then in your build, only include the API
libraries as dependencies, and include both at runtime. It doesn't enforce
runtime modularity, but it's generally good enough.
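
In Go terms, the split might look roughly like this (module path and package
layout are hypothetical); other modules compile against api only, and the
implementation is wired in once at startup:

    // api/store.go -- the only package other modules depend on at build time.
    package api

    // Store is the narrow, well-defined boundary.
    type Store interface {
        Get(key string) (string, error)
        Put(key, value string) error
    }

    // impl/memstore.go -- the implementation, referenced only where the app
    // is wired together.
    package impl

    import "example.com/app/api" // hypothetical module path

    type memStore struct{ data map[string]string }

    // New returns the implementation hidden behind the api.Store boundary.
    func New() api.Store { return &memStore{data: map[string]string{}} }

    func (s *memStore) Get(key string) (string, error) { return s.data[key], nil }
    func (s *memStore) Put(key, value string) error    { s.data[key] = value; return nil }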

The reason this isn't great is that it encourages a monolithic mindset. A
huge benefit of microservices is that they are small! This means you have
releases that are faster per service, build times that are faster per
service, and code that can be more easily reasoned about.

The author is correct that it introduces expensive cross-service calls, so
you do need to be thoughtful about your boundaries, but what you end up with
is simpler on a per-instance basis. You don't need to become devops experts;
there are plenty of deployment options that handle a lot of this: Heroku,
GAE, Beanstalk. Hell, there are even options available for your own D.C.,
though I'd argue that if you're running in your own D.C. then you should
think very hard about why that's important to your business (even if your
business is very large).

The earlier you have a microservice architecture built into your stack, the
easier it is to continue on that path; once you're monolithic, it's a huge
amount of work to go back the other way.

~~~
discreteevent
A huge benefit of modules is that they are small! This means you have
releases that are faster per module, build times that are faster per module,
and code that can be more easily reasoned about.

The earlier you have a modular architecture built into your stack, the easier
it is to continue on that path; once you're monolithic, it's a huge amount of
work to go back the other way.

~~~
bluejekyll
Have you considered what will happen when your application has reached a
multi-gig deployment? Where you have static resources commingled with
business logic? Where you have rendering blended with DB access?

Modular systems are theoretically as good as SOA or microservices, but in
practice they are not.

Failure cannot be isolated as easily; when there is a problem in the system,
the entire thing crashes.

Also, to your point: is that a single-repo or multi-repo source control
system? In a single-repo system your build and testing cycle becomes longer
and longer for the entire service, and regardless of the size of a change,
you have to deploy the entire thing. Every successful business with a
monolithic deployment, regardless of repo structure, ends up in the same
place (based on my experience): a large unwieldy beast, maintained by dozens,
hundreds, or thousands of engineers, who each have limited knowledge of the
runtime. It becomes slow, hard to deploy, hard to test, and hard to change.

Microservices/SOA don't alleviate the need for good design, but they keep all
of the above from happening on a per-service basis, shortening the turnaround
for responding to issues and deploying new features. That makes them better.

~~~
paulddraper
Holy crap!

How do you get multi-GB deployments? I mean, at some point I suppose anything
is possible, but I'm used to binaries on my system being KB or low MB.

~~~
bluejekyll
That qualifies as a microservice in my book :)

To add more: you get there by not practicing SOA; by never breaking up a
service; by working at a company that constantly ships new product rather
than focusing on restructuring the application; by not having developers own
both the build and the deployment of the app, so that they never experience
the pain; etc.

~~~
balfirevic
Come on, you can have a million lines of code and still get binaries of a few
megabytes. If that counts as a microservice then everything does.

This is a frequent problem when discussing microservices - nobody defines how
large they actually are.

~~~
bluejekyll
Size is not the only reason; sometimes it's about performance and system
utilization.

Separation of concerns tends to be my strongest indicator for defining service
boundaries.

------
Walkman
"The modularized monolith can be scaled horizontally as well, but you scale
out all modules together."

That's not exactly true. Yes, you probably have to ship the whole monolith,
but who says you also have to take traffic for all the parts? For example,
you have /users/ and /books/. You can configure some nodes to serve only
/users/ and configure different (maybe more) nodes to serve /books/. The code
sits there, but who cares?
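
As a sketch, the routing layer can be as small as this Go reverse proxy (pool
hostnames hypothetical); every node runs the full monolith, but each pool
only ever sees its slice of the traffic:

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    // proxyTo forwards requests to one pool of monolith nodes.
    func proxyTo(rawURL string) *httputil.ReverseProxy {
        target, err := url.Parse(rawURL)
        if err != nil {
            log.Fatal(err)
        }
        return httputil.NewSingleHostReverseProxy(target)
    }

    func main() {
        // Both pools run the identical monolith; only the routing differs.
        http.Handle("/users/", proxyTo("http://users-pool.internal:8080")) // hypothetical hosts
        http.Handle("/books/", proxyTo("http://books-pool.internal:8080"))
        log.Fatal(http.ListenAndServe(":80", nil))
    }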

~~~
100k
We did this at my last gig. The application was monolithic[1] but ran in
multiple environments: web application, API, admin, backend workers.

It worked quite well. The one downside is we had to load all the code (it was
Rails) so processes took more RAM than strictly necessary. But sharing data
models across all these services was pretty easy.

[1] There were a few separate services, for example a documentation CMS.

------
sreque
I think the article is missing a key value of microservices, or at least
smaller services: service ownership. With a monolith, who is on call for the
service when something goes wrong? How does that person find an appropriate
person to diagnose the issue in one of the 100 libraries included in the
monolith? What does the monolith dashboard look like?

The great thing about small services is that a team developing a service can
own it from top to bottom, including:

    1) being responsible for metrics, alarms, dashboards, and everything else required to monitor the service
    2) being on call for the service and getting directly paged when there are problems

And of course, there are other social/organizational problems with a monolith,
another example being deployments. I want to deploy my new feature but I'm
blocked because someone introduced a bug in some unrelated library that's
clogging the entire pipeline. Or, I have to release my feature according to
the deployment schedule of the monolith, which may not make sense for my team.
With smaller services, a team can own its own deployment pipeline and decide
when it wants to deploy.

A third organizational benefit of smaller services comes from process
separation. GC'ed languages work really hard to help developers pretend that
memory is free, but memory is still a finite shared resource, and it only
takes one misbehaving module to cause the whole process to start stalling in
large GC pauses. With smaller services you get process separation, which
makes the problem much more tractable. And of course there are other
exhaustible shared resources, like threads and file descriptors.

At the end of the day, I prefer smaller services because I like the social
organization where a company consists of agile, autonomous teams owning their
own services. I feel a monolithic service actively discourages that and leads
to social organizations that are less productive and successful.

~~~
eloff
Those are organizational concerns that you're conflating with microservices vs
monolith. They're tangential. I see no reason why you can't assign team
responsibility to individual modules vs microservices. You can collect metrics
for both modules and microservices and publish them in a dashboard. Alarms and
monitoring are probably unnecessary at the module level in most cases - you'd
just do it once for the whole monolith.

~~~
sreque
Good point, but I don't think it's necessarily incorrect to conflate
organizational concerns with architectural concerns. I am reminded
specifically of Conway's law:
[https://en.wikipedia.org/wiki/Conway%27s_law](https://en.wikipedia.org/wiki/Conway%27s_law)

It may be possible to deploy a monolith without having monolithic processes
and organizations in place, but does anyone have experience successfully doing
so in practice? And how easy is it compared to doing so with smaller services?

------
closeparen
For my employer, the _whole point_ of microservices is separate deploys. When
we had hundreds of engineers committing on the monolith, a bad change in one
out of the few dozen commits in a given day's upgrade could require rolling
the whole thing back.

Now each service deploys (usually) one or two commits at a time, completely
understood by the person who clicks the "upgrade" button. People working on
unrelated code don't need to block each other's release velocity.

More disciplined interfaces could have solved the spaghetti problem, but lots
of small services that a few people have absolute power over can move a lot
faster than an integration/release process shared by thousands.

------
dasil003
I've advocated for this approach before. I think it's definitely true that a
single code base is an optimal place to work out early architectural
decisions, so you can optimize the interfaces and separation before
committing to the overhead of multiple repos and deploy stacks.

One of the challenges, though, is making sure the whole team understands the
vision and doesn't violate the intended separations. Working primarily in
Rails, I've found it to be pretty terrible in this regard, partly due to
Ruby's global namespace, but also because there are no conventions to
properly separate app-level modules (Rails Engines are a second-class
citizen). Django is significantly better, since it gives you apps as
first-class citizens and because of Python's explicit module imports, but
even still you can quickly end up with a ball of mud if you're not careful.

------
jsiepkes
We built a modular system in Java with OSGi. It served us well, but now we
feel we have reached the limits of such a system and are looking at a more
microservice-based approach.

For example, a modular system is not going to help you when a module is
misbehaving, since the whole thing still runs in the same process (a JVM in
our case). If someone introduces a resource leak in some trivial module, the
whole thing still comes down.

~~~
SanderMak
Exactly, that's one of the tradeoffs discussed in the article (author here).

Another pattern that might work, which I didn't include in the article, is to
scale-out the modular application (assuming it's 'stateless') into several
clusters. Each node in the clusters still has the whole modular application.
However, each cluster will be responsible for handling a certain part of the
API. Then, put a load-balancer/API gateway in front that can route different
functional parts of your API to different clusters. Scale up the individual
clusters as required by load. Even though all nodes contain all modules,
depending on which cluster they're in, only a certain subset of modules really
takes up CPU cycles. There's still no node-to-node communication necessary,
since all nodes contain all logic.

Certainly not a pattern that's always applicable, but I've used it with
success several times for webapps with REST backends.

~~~
jsiepkes
For us scaling was never an issue; we did the stateless scale-out thing. Even
solving stateful is relatively easy (at least in Java) with solutions like
Hazelcast. The main drawback is that usually only a few services in your
monolith actually need to scale out, but you have to deploy the entire thing
when scaling.

As for OSGi: OSGi is complex; I dare say the complexity of OSGi rivals that
of a microservice setup. If I had a nickel for every classloader issue I
debugged... ORM was especially fun (for example, I wrote this piece way back:
[http://www.datanucleus.org/products/datanucleus/jdo/osgi.html](http://www.datanucleus.org/products/datanucleus/jdo/osgi.html)).
But I must admit that I don't think we could have created (and maintained)
such a large modular application without OSGi. Debugging is also way more
complex: when an issue arises you spend a lot more time tracking down which
module is misbehaving, even though we had inserted lots of probes (which
ended up in Graphite) and log statements (which ended up in Graylog) to
counter that.

In my experience, smaller, simpler applications (which I acknowledge have
their own complexity with distributed debugging) are still easier to
understand than a modularized application.

~~~
SanderMak
I'm with you on the complexity of OSGi, though some of the complexity plainly
arises because it truly forces you to modularise vs. just winging it. In that
regard, the new Java 9 module system has less of the service dynamics and
classloading tricks going on. Very curious to see how the community will pick
up the new module system.

------
techwizrd
At the start-up I currently work for, I designed the system to be composed of
microservices (with Swagger/OpenAPI) for scalability reasons, even when it
might've been easier to write modules. Before I was brought on as the first
engineer, we had a series of contractors build a poorly architected monolith
that had issues scaling. Microservices have enabled us to independently scale
different parts of the service as load changes. They have also forced a very
strong separation of concerns.

My main complaint is that we've run into various instances where we need to
access some data in a different service, so we stuff it into some object so
that the next service can pull it from the DB. However, this is a solvable
implementation issue.

The other complaints I have are that monitoring and profiling are much more
involved. Additionally, setting up a new service can be painful if you don't
have a template.

Overall, we've had a good experience with microservices; they've enabled us
to deploy faster and scale more simply.

------
the_arun
A few advantages of microservices over modules:

1. The performance of one module can impact other modules: say one module is
running slow, that will impact the entire application.

2. If we want to push code for one module, we need to push code for the
entire application. So there is no deployment isolation.

3. We have to have all the modules written in the same programming language,
and the same version of that language. Hence there is no clear path for
upgrading the version of the programming language.

Microservices address all of these, assuming we deploy the services
separately (including their data). The tradeoff is a bit of complexity and
latency for the benefit of complete isolation and independence.

~~~
icebraining
_1. Say one module is running slow, that will impact the entire application._

Why? Unless you're doing something silly like running the whole service on a
single process+thread, the other workers should still respond just fine while
the slow module is crunching away.

_2. If we want to push code for one module, we need to push code for the
entire application. So there is no deployment isolation._

Yes and no. In what sense does that worry you? You can still ship only changes
to one module, using version control. And you can still do a gradual
deployment, avoiding downtime. The only thing you can't easily do is take down
a whole module while keeping the rest of the application running.

~~~
the_arun
1. Because infrastructure resources, like CPU and memory, are shared.

2. If all the modules are managed by the same team, yes, you are right. But
if we want each module to be managed by its own one-pizza scrum team, we need
deployment isolation to avoid churn. Don't you think so?

------
cpeterso
> Modules are natural units for code-ownership as well.

I read a snarky blog post a few years ago entitled "Blame-Oriented Software
Design". It described legitimate design patterns that software developers in a
Dilbert corporate environment can use to deflect blame and extra work caused
by others' shoddy code. Examples include well-defined module boundaries,
strongly-typed APIs, and extensive assertions and logging enabled in
production code.

Unfortunately, I can no longer find this blog post, but I chuckle about it
often enough that sometimes I consider rewriting what I remember of it myself.
:)

------
eloff
I've worked on both types of projects, and I would never choose microservices
over a monolith if presented with the option.

Rather, I agree with the author that proper separation into documented modules
gives most of the benefits from microservices without any of the numerous
drawbacks.

Microservices seriously increase the operational overhead. They increase the
hardware expenses. They increase service latency through round trip times and
unnecessary work. They increase the code complexity through unneeded
serialization/deserialization and REST calls vs plain old function calls. They
make debugging a lot harder. They give you all the pain of distributed systems
and networking when all you previously needed was a function call.

With Go (and many other statically typed languages) you get proper modules
that can't reach into the private implementation of other modules (even with
reflection.) They can't have import cycles. The compiler will take care of
alerting you when an API changes in an incompatible way. Likely all your
modules will share a backend database (or databases), so you need to take care to use the
public interface of the responsible module rather than reach around behind its
back with direct queries - but that's a solvable organizational problem.
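
As a sketch of what the compiler can enforce (package layout hypothetical),
Go's internal directories make a module's guts unimportable from outside, so
the public surface really is the only way in:

    // billing/billing.go -- the public surface of the "billing" module.
    package billing

    import "example.com/app/billing/internal/ledger" // hypothetical module path

    // Charge is the only entry point other modules can reach. The compiler
    // rejects any import of billing/internal/... from outside the billing
    // tree, so nobody can reach around the API to the tables behind it.
    func Charge(userID string, cents int) error {
        return ledger.Record(userID, cents)
    }

    // billing/internal/ledger/ledger.go -- private implementation.
    package ledger

    // Record would do the direct database work; it is invisible to other modules.
    func Record(userID string, cents int) error {
        return nil
    }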

I wonder sometimes if the microservices craze isn't trying to find a technical
solution to human problems that would be better dealt with via communication
and creating solid organizational practices. Like team A doesn't trust team B
and rather than work out a common set of norms and rules they just start using
microservices to bypass dealing with the problem all together.

------
tybit
For those of you happy with your use of microservices, how many microservices
does your team handle?

I don't mind microservices, but I definitely think my company has taken it
too far: for my team of 5 we have ~25 separate components/deployments which
make up ~12 microservices. It makes it impossible to keep track of each
service and be familiar with the idiosyncrasies of each project.

------
Xcelerate
In my experience, the biggest obstructions to modularity are the "unknown
unknowns". I can't tell you how many times I've been trying to set up some
software where it turns out the installation was looking for file ABC in
directory XYZ even though the documentation wasn't up to date and listed
directory EFG (or nothing at all). Also, environment variables (i.e., global
variables) are a pain and should be eliminated altogether. What would replace
them? I'm not sure, but _any_ random idea has got to be an improvement of some
sort over the current system.

Docker alleviates the environment hell problem somewhat, so it's easy to set
up a bunch of contained microservices that aren't going to fall apart when the
global environment changes.

We need more research into the complexity of programming patterns. The same
patterns and anti-patterns keep popping up under different names in slightly
different forms.

~~~
Nomentatus
Speaking of random ideas: dump the globals into functions (one function per
global) - that's right, a dumb single-variable function. You can't monitor or
pop an assert on a variable, but you can slap asserts inside a stub function,
etc. Plus, you know a function ain't local. Globals are such poison that the
minor inefficiency created by converting a few globals to functions is
usually meaningless.
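
A minimal Go sketch of that idea (names hypothetical): the global becomes an
accessor function, which gives asserts, logging, and breakpoints exactly one
home:

    package main

    import "fmt"

    var maxRetries = 3 // the former global, now private to this package

    // MaxRetries is the dumb single-variable function standing in for the
    // global: every read funnels through here.
    func MaxRetries() int {
        if maxRetries < 0 {
            panic("maxRetries must be non-negative") // the "assert in a stub function"
        }
        return maxRetries
    }

    func main() {
        fmt.Println("retries:", MaxRetries())
    }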

I'm beginning to think that we need a richer vocabulary to address
"complexity" - to analyse it into its various forms (some of which are likely
to be beneficial). Separating code into lumps means more lumps, but fewer
dependencies and more known absence of dependency... We keep using the one
word "complexity" to point at various things. Distinguishing between
"complex" (the opposite of 'simple') and "complicated" (the opposite of
'easy') is a start, I suppose, but I can't help thinking there are more
careful distinctions that could be made.

------
sebringj
I've found that using submodules in git is really helpful as well, even
though a submodule isn't a module per se. For example, with Mongo I have
models in node.js, and it is really simple to share a model across every node
project in the org, since mongo models contain the schema, validation, etc.
------
ajessup
Recently I've seen a few folks use protobufs within parts of a monolith as a
mechanism to enforce a strongly typed interface between components. This helps
define a contract between teams without the overhead of a distributed system.
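
For example, the contract between two in-process components might be pinned
down in a small .proto file along these lines (package, message, and field
names are hypothetical):

    syntax = "proto3";

    package orders;

    // The only shape the billing component accepts from the orders component.
    // Both sides compile against the generated types, so any drift breaks the
    // build instead of surfacing at runtime.
    message ChargeRequest {
      string user_id = 1;
      int64 amount_cents = 2;
      string currency = 3;
    }

    message ChargeResponse {
      bool accepted = 1;
      string decline_reason = 2;
    }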

~~~
mackross
gRPC can do the same and define the service calls.

------
mooreds
I found this a refreshing article that has a nuanced view of different
solutions. I was also glad it acknowledged that different solutions make
sense at different scales and at different times in an organization's life.

------
euske
Watching a discussion like this really makes me wonder: what is a module,
anyway? There are obviously varying aspects of modularity, such as:

* different code locations (e.g. functions)

* different namespaces (e.g. libraries / Python modules)

* different release cycles / distribution models (e.g. ES6/Java modules in this sense?)

* different OSes / languages / security models (e.g. microservices?)

It seems like people (including myself) get confused by putting these on a
one-dimensional spectrum. The reality is probably never that simple.

------
bg4
This is why I tend to build microservices that each implement a bounded
context, then compose those different APIs, as necessary, to implement the
public-facing API.

I don't need granular services - that places too much burden on operations -
but I still get the separation of context and concerns, and the
'anti-corruption' layers, that I'm looking for in my design.

------
im_down_w_otp
gen_server:start_link/3

The original microservice :-)

------
shitgoose
I don't have a TV so somehow I missed the moment when "monolith" became a
dirty word.

------
migueloller
His comments about the lack of compile-time checking of the interfaces
between microservices made me think of GraphQL. Using GraphQL as the layer
between microservices could alleviate this issue, especially if the usual
"deprecate, don't version" approach of GraphQL is followed.

Normally GraphQL is talked about within the context of user facing
applications interacting with servers but it could be quite useful for
strongly typed machine to machine communication.
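
A tiny schema sketch of that approach (type and field names hypothetical):
rather than versioning the API, old fields are marked deprecated and keep
resolving while consumers migrate:

    type Invoice {
      id: ID!
      totalCents: Int!
      # "Deprecate, don't version": the old field keeps working, and tooling
      # flags any consumer still reading it.
      total: Float @deprecated(reason: "Use totalCents.")
    }

    type Query {
      invoice(id: ID!): Invoice
    }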

We already use GraphQL for our client facing API. We're going to start rolling
out microservices at our startup and I think we'll use it for communication
between microservices as well.

~~~
phamilton
I'd recommend taking a look at gRPC.

~~~
migueloller
Just took a look at gRPC. Thanks for the great recommendation!

------
keithnz
I'd be interested to hear if anyone has much experience using an Actor-type
model instead of microservices - things like Akka/Akka.NET, Orleans, Erlang's
OTP, etc.

------
aryehof
Isn't a microservice just a small application behind a network boundary?
Moving what might be a big ball of mud into smaller ones?

------
tabeth
Shouldn't the goal be to do whatever provides the most value and builds your
product or business the fastest? If you waste time with a microservice
architecture where you could've done more, faster, with a monolithic app,
that seems like a poor use of time.

~~~
akamaozu
"If I had 8 hours to chop down a tree, I'd spend the first 6 sharpening my
axe" \- Abraham Lincoln

If you delivered the initial thing in 4 weeks vs 8 but you'd spend way more in
future development and developer time, would you say you made the right
decision?

To put it another way, delivering the wrong thing faster doesn't make it any
more right.

~~~
tabeth
couldn't you just as easily say you build the wrong thing in more time with
your hypothetical increase in future speed, but you go out of business because
you never have any customers?

what you're saying is the right thing, if you have unlimited money.

------
z3t4
A microservice should be independent.

~~~
rapala
If a micro service is independent it makes no calls to other micro services.
Thus the system has a single service and that service is a monolith.
Contradiction, thus a micro service cannot be independent.

~~~
z3t4
Let's say ycombinator.com was news.y, www.y and apply.y, all served from the
same "monolith", and they decided to make them into separate microservices...
The best way is to make them all independent!

