
When Should I Break My Application into Multiple Containers? - fatherlinnux
http://rhelblog.redhat.com/2016/03/16/container-tidbits-when-should-i-break-my-application-into-multiple-containers/
======
BenoitP
Am I the only one that loves the monolith? Apart from the scaling mentioned in
the article, this feels like accidental complexity. And, let's face it, 95% of
apps are about serializing complex business problems into code.

Also, I don't agree with:

> Since these components are doing different things, there is little chance
> that there would be a performance benefit

Complementary workloads should not be a problem. Running them together would
actually make thrifty use of the hardware.

The only reason I see for breaking the app this way would be to enforce low
coupling.

~~~
dayjah
Monoliths have their role in the world, certainly! I've heard Facebook have a
mantra of sorts about their monolith, if anyone here could corroborate that
would be great. It goes something like:

\- build v1 in the monolith

\- do users like v1?

\- if so, does v1 use too much of the monolith?

\- improve the implementation within the monolith

\- does it still use too much of the monolith? Break it out into its own stack

I believe there are only two stacks:

\- ad serving, where lower latency literally means more money and thus
benefits from optimizations that aren't realistic within the monolith,

\- and "everything else".

~~~
ghaff
Martin Fowler has argued for something similar. [1] I suspect that the "right"
answer is complicated and based on things like where you're coming from, the
size of team(s), and the degree to which there are well-defined shared
services.

DHH of Basecamp also argued recently [2] that monoliths can make sense,
although I noted at the time that, with a team of about 12 engineers, Basecamp
isn't much larger than the two-pizza teams you see promoted for
microservices.

[1] [http://martinfowler.com/bliki/MonolithFirst.html](http://martinfowler.com/bliki/MonolithFirst.html)

[2] [https://m.signalvnoise.com/the-majestic-monolith-29166d022228#.3npyzyfsr](https://m.signalvnoise.com/the-majestic-monolith-29166d022228#.3npyzyfsr)

------
markbnj
When we first started containerizing back-end components a couple of years
ago, I spent a lot of time thinking about containers this way: how many pieces
of my app should go into each container? After doing it for a while and paying
attention to evolving best practices, I realized that I was looking at it
wrong. Containers are not a large-granularity box into which servers are
packed. They're sandboxes for processes. Individual containers have very
little overhead. They are not analogous to VMs, instances, or anything at that
level of abstraction. They are absolutely more analogous to chroot.

So, for example, in the beginning I might have created a "logging service"
with logstash, redis, and kibana running in a single container. Now I would
have one container for each of those processes, assembled into a pod or
cluster.

This becomes nearly a mandatory design pattern when you start using
orchestration systems like kubernetes, because they work best and most
flexibly when the lifecycle of a given container matches the lifecycle of the
single process it hosts. If the process dies, the container transitions out of
the running state and is restarted. This gets a lot more complex and harder to
reason about if the "readiness" state of the container relies on multiple
processes running in its sandbox.
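
As a hypothetical sketch of that pattern, the logging stack could be assembled
as a Kubernetes pod with one container per process (image tags and the probe
endpoint are illustrative assumptions, not from the original comment):

```yaml
# Hypothetical pod: one container per process, so each container's
# lifecycle tracks exactly one process.
apiVersion: v1
kind: Pod
metadata:
  name: logging
spec:
  containers:
    - name: logstash
      image: logstash:2.2      # illustrative tag
    - name: redis
      image: redis:3.0         # illustrative tag
    - name: kibana
      image: kibana:4.4        # illustrative tag
      readinessProbe:          # readiness depends on this one process only
        httpGet:
          path: /status        # assumed endpoint
          port: 5601
```

If kibana dies, only the kibana container is restarted; the orchestrator never
has to guess which of several processes inside one sandbox went away.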

~~~
cpitman
Exactly, every container developer configuring their own init process/process
watcher is a recipe for zombie systems that are sort of running but totally
broken.

------
eternalban
Funny. Someone please let Red Hat know that Struts is a framework [1] built on
top of Sun's JEE Web Profile [2].

[1]: [http://stackoverflow.com/questions/1636238/difference-between-apache-struts-and-java-ee](http://stackoverflow.com/questions/1636238/difference-between-apache-struts-and-java-ee)

[2]: [https://jaxenter.com/introducing-the-java-ee-web-profile-103275.html](https://jaxenter.com/introducing-the-java-ee-web-profile-103275.html)

aside/ Sun Microsystems really blew it on the pedagogical front. Hopefully
future tech leaders will reflect on SMI's failure to take advantage of what
was (imo) a 20+ year visionary head start in design & architecture.

~~~
jsight
I'm not really seeing the issue. Struts is built on Java EE (which was implied
if not explicitly stated). The backend in his scenario was built explicitly on
JAX-RS (a Java EE component). He could have been a little more explicit, but
I'm pretty sure that wasn't the point of his article.

------
dayjah
Posts like this are interesting: this one felt like it started out as "I'm
going to teach you about Docker" and then shifted into a piece about general
technical decision making and being pragmatic.

Docker has fast become my world: from local development and detailing of dev
setups through to deployment of apps using ElasticBeanstalk's single-container
Docker pattern.

The dev-setup piece seems the most valuable currently. That you can land in a
project I've been in, issue `docker-compose build` and `docker-compose up`,
and have a fully functioning development server without needing to install
half a dozen dependencies locally really does feel like a positive.
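
That kind of dev setup is typically driven by a `docker-compose.yml` at the
project root; a minimal hypothetical example (service names, images, and ports
are assumptions for illustration):

```yaml
# Hypothetical docker-compose.yml: `docker-compose build` then
# `docker-compose up` brings up the whole dev stack.
version: "2"
services:
  web:
    build: .             # builds the app image from the local Dockerfile
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:9.5  # dependency runs in a container, not installed locally
```

New contributors never install Postgres themselves; the dependency list lives
in the compose file instead of a setup document.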

As a devopsy-leaning person, the "single binary" style of deployment it lends
itself to is beautiful too. Pragmatism always, though: for many of our
services the network IO impact is undesirable, so we use Go (statically linked
binaries) on bare metal for those.

~~~
tylfin
Is this network IO impact still an issue?

[http://stackoverflow.com/questions/21889053/what-is-the-runtime-performance-cost-of-a-docker-container](http://stackoverflow.com/questions/21889053/what-is-the-runtime-performance-cost-of-a-docker-container)

------
annnnd
Great question, not so great answer. I think the answer boils down to: "it
depends on your deployment needs". If you want to scale the application across
different machines, break it into pieces. Ditto if you want to replace
containers on upgrade (hot swap). But if you only use Docker to simplify
deployment of interconnected parts that never function without one another,
don't break it into pieces.

I know that last piece of advice goes against common wisdom, but in my
experience this is so.

------
atemerev
Around 4-6 months before you'll need to run multiple copies of your services
on different servers/VMs for scaling or HA reasons.

"Multiple copies" meaning "more than 2", excluding master-slave configurations.

But at this stage, containerization is practically begging to be done,
becoming a pressing need. If you have to ask the question, it is probably too
early for that. :)

------
willvarfar
I am a bit surprised there is no mention of privilege separation and security.

------
bliti
_…yes, if your application / service has good separation of code,
configuration, and data, installs cleanly (as installer scripts can make this
whole process difficult), and features a clean communication paradigm – it
does make sense to break the application up / allocate one service per
container._

Amen. Treat your software as the sum of its parts rather than a bunch of
parts that will do.

------
fpoling
Splitting an application into containers greatly increases isolation between
components, minimizing the chance of a bug turning into an exploit.

For example, if a component communicates only over Unix sockets, then one can
put it into its own container with networking disabled. Similarly, one can
selectively choose which parts of the filesystem are accessible with read or
read-write access.
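
A minimal sketch of that kind of lockdown with the Docker CLI (the image name
and mount paths are hypothetical):

```shell
# Hypothetical example: the component talks only over a Unix socket,
# so it gets no network stack (--network none), a read-only root
# filesystem (--read-only), and only two explicit bind mounts:
# a read-write directory holding the socket, and read-only config.
docker run \
  --network none \
  --read-only \
  -v /run/myapp:/run/myapp \
  -v /etc/myapp:/etc/myapp:ro \
  myapp-worker
```

Even if the worker is compromised, it has no network reachability and can only
write to the one directory it was explicitly given.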

------
cpitman
As I read it, this blog isn't about microservices vs monoliths. Microservice
doesn't mean "more than one process" or even "very small processes".

From the microservice perspective a single service is supposed to be a
vertically integrated set of functionality all the way from data to the UI,
instead of horizontally segmenting API services and UI. Horizontal
segmentation doesn't give the same team-scaling benefits since it still takes
significant inter-team coordination to deliver functionality.

So the example is really a single "service" that just happens to include
multiple processes.

------
merb
Actually, microservices are also about team size. It makes no fucking sense to
break an application into microservices when your team consists of five
members. Microservices have their place, but monoliths do, too.

Containers will also someday have their place, but at the moment they are just
not ready for 70% of the workload; they put too much complexity on your
system. Microservices add a lot of complexity, too, which is easy to handle if
your team has a certain size, but without proper tooling and management
services it's really, really hard to do.

------
zaro
Embrace madness and be honest with yourself:

Never!

Use Go and build a single static binary that does everything, and f_ck this
fancy container technology that splits this into that, sandboxes it, and all
the other security sh_t.

Disclaimer: I don't work at Google, Red Hat, DHL or anything remotely
associated with Go, containers, shipping.

------
exabrial
Only when you need to, because it will save you time managing things. So...
this is probably never for the 98% of us who aren't writing Google-sized
applications.

If you have multiple applications, I'd definitely put each one in a container
to beef up security.

~~~
fatherlinnux
I tend to agree :-) Hopefully, you guys read my comments in the comments
section of the article. Quite honestly, I am skeptical about how often it
makes sense to "force" splitting up an application. I hope that was clear in
my article...

------
atomic77
For me, the answer for the given scenario was so straightforward and obviously
"yes" that I wondered why a whole blog post was necessary?

This now has me wondering if I am a bit more drunk on the container koolaid
than I may have thought!

~~~
fatherlinnux
Read the email thread that inspired me to write it and it will become obvious.
In the original email, a user asked if they should split an application that
was almost exactly what I describe in the blog post into multiple containers
because you "should" only run one process per container. That is a ridiculous
reason. It provides zero business value in and of itself.

The two main business values I have found from splitting something up are 1.
independent scaling and 2. independent development teams. Other than that,
most of it is preference and philosophy. You can get to the right answer for
the wrong reasons; that doesn't mean you were right.... :-(

------
cbsmith
How about... whenever your application has multiple processes? ;-)

~~~
xj9
Exactly this. Sometimes I will have multiple entry points (app, celery, &c.),
but each process is always launched in a separate container.

~~~
fatherlinnux
I would argue, service, not process :-)

~~~
cbsmith
You could certainly make that argument, but I've struggled to think of cases
where processes aren't the right point of container separation.

~~~
fatherlinnux
Nginx, Apache Prefork, Apache MPM, and PHP FPM all have multiple processes,
and I would NEVER try to split up subprocesses of any of these into multiple
containers.

It's quite simple: if something communicates through shared memory, it should
probably be in the same container. If it doesn't, that's NOT a reason to keep
it in the same container, but I don't see how it justifies the work of
splitting it up....

~~~
cbsmith
Yeah, worker processes are the only thing I can think of. Even then, it is
often more flexible to just run one-process containers and have more of them.
Then you can allocate containers more flexibly.

Absolutely for anything that uses shared memory for IPC, but then you
basically have threads pretending to be processes. ;-)

------
z3t4
Another thing you should consider is that you will become the maintainer of
several of the dependencies.

