
Experimenting with CoreOS, CloudFormation, confd, etcd, and fleet - marceldegraaf
http://marceldegraaf.net/2014/04/24/experimenting-with-coreos-confd-etcd-fleet-and-cloudformation.html
======
marceldegraaf
Author of the article here; ping me if you have any questions or comments :-).

~~~
krschultz
I am wondering why you have a 1:1 mapping between Docker containers and
processes. I have not used Docker, but my impression was that it's a good way
to group a few inter-related processes together while separating them from
unrelated ones. Our thought was basically one container per role/service in
our platform (app servers, workers, DB, logging, etc.). If you have one
process per container, isn't it just a slightly stronger form of sandboxing a
process?

~~~
danudey
You can do it in several different ways, and it really depends on what you're
trying to accomplish.

If you were trying to build wordpress-as-a-service, for example, you might
create a Docker instance which includes nginx, php-fpm, memcached, and maybe
even mysql (which makes it easy to make snapshots of the entire state of the
system).

That said, fleet has a really interesting way of making interconnected
systems, so you can create a memcached instance, an nginx instance, and a php-
fpm-running-wordpress instance, and you can manage them separately. This makes
it easy to upgrade memcached without touching nginx or php-fpm, or to change
the nginx config without having to affect the running application server (and
risk flushing the APC cache), etc.
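That separation is expressed with one fleet unit per service — a plain
systemd unit plus an optional [X-Fleet] section. A minimal sketch for the
memcached piece (the Docker image name and port are illustrative, not from
the article):

```ini
# memcached.service — a minimal fleet unit sketch; image and port are assumptions
[Unit]
Description=Memcached
After=docker.service
Requires=docker.service

[Service]
; clean up any stale container from a previous run ("-" ignores failures)
ExecStartPre=-/usr/bin/docker kill memcached
ExecStartPre=-/usr/bin/docker rm memcached
ExecStart=/usr/bin/docker run --name memcached -p 11211:11211 memcached
ExecStop=/usr/bin/docker stop memcached

[X-Fleet]
; don't schedule two memcached units on the same machine
Conflicts=memcached*.service
```

You'd launch it with `fleetctl start memcached.service`, and upgrade it by
replacing just this unit — the nginx and php-fpm units stay untouched.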

It also allows you to make everything pluggable in your design. You might want
to add varnish in front of your nginx instance, for example, but sometimes you
might not. That sort of flexibility is really useful in some circumstances.

In our infrastructure at work, the design I would use would probably be
similar to this; we have front-end services (nginx), internal services that
they proxy to, internal services that those services access, asynchronous
processing services, dispatch services, activemq, redis, memcached, mysql,
etc. Having the flexibility to add more of any one of those things without
having to scale any of the rest is really important.

For other circumstances, it makes a lot more sense to bundle everything into a
faux-VM. Run supervisord to launch ssh, nginx, php-fpm, memcached, and
whatever else needs running. Share volumes to local filesystems so that
configs, local changes, etc. can be taken care of, and you're good to go.
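The faux-VM approach usually boils down to a supervisord config as the
container's entry point — something like this sketch (binary paths and flags
are typical defaults, not verified against any particular image):

```ini
; supervisord.conf — run several services in one container (paths are assumptions)
[supervisord]
nodaemon=true                      ; keep supervisord in the foreground as PID 1

[program:sshd]
command=/usr/sbin/sshd -D

[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"

[program:php-fpm]
command=/usr/sbin/php-fpm --nodaemonize

[program:memcached]
command=/usr/bin/memcached -u memcache
```

The Dockerfile's CMD would then just be `supervisord -c /etc/supervisord.conf`.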

So in the end, the real question is what do you want out of your containers?
Flexibility or simplicity?

------
jedberg
I saw a demo of CoreOS yesterday, and was super impressed. I'd heard of it
before, but never really understood the power until I saw the demo.

CoreOS is basically an OS with the fundamentals to force you to "do cloud
right".

If I were building a company right now on AWS (or any other cloud) I'd
seriously be considering basing my infrastructure on CoreOS.

~~~
rdtsc
> CoreOS is basically an OS with the fundamentals to force you to "do cloud
> right".

So what are some of those?

~~~
dvanduzer
one of the better summaries (came out of Heroku, I believe):
[http://12factor.net/](http://12factor.net/)

the hardest part is always shedding the baked-in habits.

~~~
beat
I love the 12 Factor App! I recommend that link all the time to people. It's
kind of like functional programming, in that once you grok the fundamentals,
it changes your attitude completely.

------
tonetheman
While all this sounds cool, it is a tiny bit frustrating that they do not
have an ISO to boot. I found at least one other person looking for one, but I
could not figure out why that was not an option.

It almost sounded like this is an OS, but not one you install? Or somehow the
OS lives only in memory? Like Tiny Core, maybe?

~~~
matteotom
I don't think it's quite at the point of being stable enough to provide an ISO
yet.

Also, while it is possible to run it on bare metal, doing so would not be the
most productive approach in most cases. One of the strengths of CoreOS is the
ability to quickly add new machines to a cluster and manage applications not
at the individual-server level, but as collections of services running via
Docker. That means that, unless you have quite a few machines ready to scale,
it works best on something like AWS or Google Cloud Platform.
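The "quickly add new machines" part mostly comes down to booting every node
with the same cloud-config pointing at a shared etcd discovery token. A
sketch in the 2014-era cloud-config format (the discovery token is a
placeholder you'd generate yourself):

```yaml
#cloud-config
coreos:
  etcd:
    # every machine booted with this token joins the same cluster
    discovery: https://discovery.etcd.io/<token>
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
```

Boot a new instance with this user-data and it shows up in `fleetctl
list-machines` with no per-machine configuration.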

~~~
robszumski
For small to medium clusters, cloud providers work great. As deployments of
CoreOS get larger, PXE booting will probably become the most popular way to
add capacity: plug in 10/100/1000 servers, turn them on, and you have instant
capacity. The upper end of the target scale for CoreOS consists of companies
that own their own datacenters or have many points of presence.
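A PXE-booted node would come up from a pxelinux entry along these lines — the
kernel/initrd filenames follow CoreOS's published PXE image naming of the
time, and the cloud-config URL is a placeholder:

```
default coreos
prompt 0

label coreos
  kernel coreos_production_pxe.vmlinuz
  append initrd=coreos_production_pxe_image.cpio.gz cloud-config-url=http://example.com/pxe-cloud-config.yml
```

Every box that network-boots from this entry joins the cluster with the same
cloud-config — no ISO or per-machine install step involved.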

~~~
matteotom
True, but they still would not be using ISOs.

