
DigitalOcean Partners with CoreOS for Large-Scale Cluster Deployments - dedene
http://techcrunch.com/2014/09/05/digitalocean-partners-with-coreos-to-bring-large-scale-cluster-deployments-to-its-platform/
======
waffle_ss
This image uses the CoreOS Alpha channel, which is not supposed to be used for
production[1]. It "closely tracks current development work and is released
frequently" so I would be using it with the knowledge that things might break.
In other words, CoreOS on DigitalOcean should only be used for trying out
CoreOS and not for running production apps (for now). But if I were going to
do that, there is already a Vagrant setup[2] that is super easy to use.
Hopefully DigitalOcean will provide a CoreOS Stable image soon.
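For what it's worth, the Vagrant setup[2] is just a git repo; assuming Vagrant and VirtualBox are already installed, a session looks roughly like this (the machine name follows the repo's defaults):

```shell
# Sketch only; see the coreos-vagrant README for the current steps.
git clone https://github.com/coreos/coreos-vagrant.git
cd coreos-vagrant
vagrant up          # boots a CoreOS VM (Alpha channel by default)
vagrant ssh core-01 # log into the instance
```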

On the subject of DigitalOcean images: over the last month or so there was a
severe Docker bug[3] that made Linux kernel 3.15 unusable. Linode let me easily
select a 3.14 kernel to use for my host OS to get around the bug, but
DigitalOcean doesn't have that level of granularity. So DigitalOcean either
needs to provide more fine-tuned configuration of images or provide a CoreOS
Stable image before I would think of using it for production Docker
containers.

Finally, CoreOS is still an enormous pain[4] to install on Linode, so I hope
this gives Linode a strong nudge to make it easier to install there.

[1]: [https://coreos.com/releases/](https://coreos.com/releases/)

[2]: [https://coreos.com/docs/running-coreos/platforms/vagrant/](https://coreos.com/docs/running-coreos/platforms/vagrant/)

[3]:
[https://github.com/docker/docker/issues/6345](https://github.com/docker/docker/issues/6345)

[4]:
[http://serverfault.com/a/620513/85897](http://serverfault.com/a/620513/85897)

~~~
lsllc
Vultr have supported CoreOS for a while (and FreeBSD!)

[https://coreos.com/docs/running-coreos/cloud-providers/vultr/](https://coreos.com/docs/running-coreos/cloud-providers/vultr/)

~~~
devicenull
And any other x86 OS that supports VirtIO :)

------
michaelsbradley
The article _How To Set Up a CoreOS Cluster on DigitalOcean_ [1] (written by a
DigitalOcean employee) fails to mention what seems to me to be a serious
security-related concern.

Since droplets with private networking enabled are on the same private network
as other customers' droplets, if "$private_ipv4" is specified for "addr" and
"peer-addr" in cloud-config, isn't it critical that etcd be secured with TLS
and client cert authentication?

See: _CoreOS – Etcd: Reading and Writing over HTTPS_ [2]

I realize that delving into that aspect of coreos/etcd configuration is beyond
the scope of an introductory "how to" article, but I believe that some strong
mention should be given to this concern.
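To make the concern concrete, securing etcd per [2] means starting it with cert/key/CA flags along these lines (a sketch; exact flag names depend on the etcd version, and the file paths here are hypothetical):

```shell
# Hypothetical paths; flags follow the etcd security doc linked above.
etcd -name core-1 \
  -addr "$private_ipv4:4001" -peer-addr "$private_ipv4:7001" \
  -cert-file /etc/ssl/etcd/server.crt \
  -key-file  /etc/ssl/etcd/server.key \
  -ca-file   /etc/ssl/etcd/ca.crt    # with -ca-file set, clients must present
                                     # a cert signed by this CA
```

Without the CA/client-cert part, anyone on that shared private network could read and write your cluster state.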

I made a comment[3] to this effect on DigitalOcean's website.

[1] [https://www.digitalocean.com/community/tutorials/how-to-set-up-a-coreos-cluster-on-digitalocean](https://www.digitalocean.com/community/tutorials/how-to-set-up-a-coreos-cluster-on-digitalocean)

[2] [https://coreos.com/docs/distributed-configuration/etcd-security/](https://coreos.com/docs/distributed-configuration/etcd-security/)

[3] [https://www.digitalocean.com/community/tutorials/how-to-set-up-a-coreos-cluster-on-digitalocean?comment=17485](https://www.digitalocean.com/community/tutorials/how-to-set-up-a-coreos-cluster-on-digitalocean?comment=17485)

~~~
akbar501
The article you linked to covers securing etcd.

However, what is the standard approach for securing Docker container-to-
container communication across hosts, for example from an app server to a DB
server?

Is IPsec set up within the CoreOS network layer, or is the security provided
by Docker? If so, what are the options?

~~~
michaelsbradley
I don't think CoreOS does anything special in this regard.

It should be possible via cloud-config to change the runtime config of the
docker service[1], in which case one could set "--icc=false"[2] to enforce
stricter rules about inter-container communication on a particular docker host
(e.g. a coreos droplet).

[1] [https://coreos.com/docs/launching-containers/building/customizing-docker/](https://coreos.com/docs/launching-containers/building/customizing-docker/)

[2] [https://docs.docker.com/articles/networking/#between-containers](https://docs.docker.com/articles/networking/#between-containers)

EDIT:

Okay, I see you were asking about regulating network communication between
containers on separate docker hosts, i.e. coreos instances.

That's a good question! I still don't think CoreOS addresses that concern in
any special way at the level of iptables and routing (but I could be wrong).
What it does give you is the ability to control service affinity with respect
to your fleet "units". That way, you can be certain that docker containers
which need to be "linked" in order to communicate properly (e.g. because you
have set "--icc=false") will run on the same host.
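A minimal sketch of that combination (the daemon flags are the ones from the Docker networking docs linked above; the container names and image are hypothetical):

```shell
# Daemon started with inter-container communication off:
docker -d --icc=false --iptables=true
# An explicit link then re-opens communication between a specific pair:
docker run -d --name db postgres
docker run -d --name web --link db:db my-web-image  # web can reach db; unlinked
                                                    # containers cannot
```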

------
beigeotter
You can find the DigitalOcean tutorials on using CoreOS here:
[https://www.digitalocean.com/community/tutorial_series/getting-started-with-coreos-2](https://www.digitalocean.com/community/tutorial_series/getting-started-with-coreos-2)

~~~
HorizonXP
This was useful, as I couldn't figure out how to add the cloud-config files I
use for my Vagrant-based CoreOS cluster.

Here's another question: if I have a droplet up and running already, does
anyone know how I might change it from Ubuntu to the new CoreOS image? I'd
rather convert it than create a new droplet so that I keep the same public IP;
otherwise I'd have to update my DNS records, which takes time and is outside
my direct control.

~~~
tedchs
CoreOS is not a drop-in replacement for Ubuntu; migrating to it requires re-
deploying your services inside Docker containers. I would think you might want
to run your service implementations in parallel and then cut over later.

Can you not move IPs between DigitalOcean droplets?

~~~
tsileo
It's impossible to move IPs between DO droplets, which I think is a big
drawback. It prevents a lot of people from choosing DO over AWS EC2.

------
STRML
This is actually _really big news_ for anyone running or interested in running
a Docker-based PaaS system such as Deis or Flynn. DigitalOcean's cheap
instances are a great match for Docker containers.

As of Deis 0.8.0 it only runs on CoreOS, and I believe most other DIY PaaS
systems are moving the same way.

IMO Docker + etcd is a far more sane configuration than endless Ruby Chef
scripts, or worse, Amazon OpsWorks.

~~~
wastedhours
I'm really out of touch with Docker and CoreOS, so please forgive my ignorance
if this is a ridiculous Q: could this combo be used for spinning up machines
with specific apps for thin clients? Or is it more about scaling one app?

~~~
drcode
Essentially it lets you run 20 super-lightweight VMs on a single machine.
Each one of those could be running a different app, and Docker makes it really
easy to build a custom machine image from a single script (called a
Dockerfile), with any standard Linux OS as a base.

So yes, powering 20 thin clients running different apps from a single server
is a perfect use case.
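As a concrete (and entirely hypothetical) example of that "single script", writing a Dockerfile and building/running it might look like:

```shell
# Write a minimal Dockerfile, then build and run the resulting image.
cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nginx
CMD ["nginx", "-g", "daemon off;"]
EOF
docker build -t thin-client-app .
docker run -d -p 8080:80 thin-client-app  # serve the app on host port 8080
```

Each thin client's app would just be a different image built from a different Dockerfile.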

~~~
gregory90
What about isolation between apps? I have app A connected to database A, and
app B connected to database B. Is there a way to deny connections from app A
to database B etc?

What I want is to make groups of containers that can talk only to each
other(only to containers within one group). Does CoreOS provide something like
that? Maybe kubernetes? What are possible options?

~~~
drcode
Docker has a sophisticated system for controlling which ports are open on each
container and which other containers can "see" it on those ports.
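A rough sketch of that model, with hypothetical container and image names: a database that never publishes a port to the host is only reachable by containers explicitly linked to it (assuming "--icc=false" on the daemon, as discussed elsewhere in this thread).

```shell
# Database A stays internal: no -p flag, so nothing is published to the host.
docker run -d --name db_a my-postgres-image
# App A is linked to it and gets its address via the link.
docker run -d --name app_a --link db_a:db my-app-image
# App B has no link to db_a; with --icc=false on the daemon it cannot
# connect to db_a at all, giving you the per-group isolation you describe.
docker run -d --name app_b my-other-app-image
```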

------
jmbro
Digital Ocean doesn't load the kernel from the current system image, but
instead uses a prestored external kernel associated with the image. This means
that upgrades to the kernel from within the droplet (e.g. distribution
security updates) are ignored (see
[http://digitalocean.uservoice.com/forums/136585-digital-ocean/suggestions/2814988-give-option-to-use-the-droplet-s-own-bootloader-](http://digitalocean.uservoice.com/forums/136585-digital-ocean/suggestions/2814988-give-option-to-use-the-droplet-s-own-bootloader-)).
There is a workaround using kexec (see
[https://www.alextomkins.com/2013/11/digitalocean-debian-kernel/](https://www.alextomkins.com/2013/11/digitalocean-debian-kernel/)).
Does anybody know whether a similar approach would work for CoreOS, given its
whole-image update process, or whether the DigitalOcean/CoreOS teams have
already taken care of this some other way?
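For reference, the kexec workaround in the linked post boils down to chain-loading the droplet's own on-disk kernel after DO's stored kernel boots (a sketch with Debian-style paths; whether CoreOS's image layout permits the same trick is exactly the open question):

```shell
# Debian-style paths; run on the droplet after it boots DO's external kernel.
apt-get install -y kexec-tools
kexec -l /vmlinuz --initrd=/initrd.img --reuse-cmdline  # stage the on-disk kernel
kexec -e                                                # jump into it immediately
```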

~~~
Nux
WHAT?? They still haven't fixed this? Unbelievable.

~~~
yrro
Yeah, and their Debian kernel images are, by now, years out of date.

------
rb2e
The post on DO's blog may be more informative:
[https://www.digitalocean.com/company/blog/coreos-now-available-on-digitalocean/](https://www.digitalocean.com/company/blog/coreos-now-available-on-digitalocean/)

------
kapilvt
One nice unrelated thing that didn't make any of the blog posts: DigitalOcean
now supports user data when launching instances via the console or the API!
But it looks like they still need to update their other OS images to install
cloud-init.
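If I understand the feature correctly, user data is just an extra field on droplet creation; a hypothetical API call might look like this (the field names follow DigitalOcean's v2 API as I understand it, and the image slug and cloud-config payload are made up):

```shell
# Hypothetical sketch of passing user data via the DigitalOcean API.
curl -X POST "https://api.digitalocean.com/v2/droplets" \
  -H "Authorization: Bearer $DO_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "core-1",
       "region": "nyc3",
       "size": "512mb",
       "image": "coreos-alpha",
       "user_data": "#cloud-config\ncoreos:\n  etcd:\n    addr: $private_ipv4:4001"}'
```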

~~~
raiyu
When we began the work to integrate with CoreOS we saw that it was a perfect
opportunity to build out the metadata service, which is why we decided to
delay that initial launch until we rolled out this service.

After we've had a chance to work through the bugs that customers uncover that
we missed in our testing, we'll move forward with updating the rest of our
images for this new metadata service and launching it publicly for production
use by all customers.

Thanks

------
jimmyfalcon
I remember attending a talk given by the CEO a few months ago. CoreOS's strong
point is hosting application servers, because it does automatic restarts and
updates; it is less suited to hosting cannot-go-down systems such as
databases.

This is exciting to me from a technological standpoint.

1. One of the first large public projects written in Go (after Docker).

2. One of the first large public projects using Raft (a consensus algorithm
aimed at replacing Paxos).

I am really looking forward to seeing how this project turns out. Personally,
I wouldn't move any of my projects onto CoreOS for at least a few years.

Other than that, I always question how they plan to make money. Consulting
model?

------
polvi
Links to docs, etc, here: [https://coreos.com/blog/digital-ocean-supports-coreos/](https://coreos.com/blog/digital-ocean-supports-coreos/)

------
cdnsteve
So does this mean that when running CoreOS on DigitalOcean with Docker for
your deployment, you no longer need to worry about OS-level updates? Is this
now handled by DigitalOcean?

~~~
totallymike
CoreOS has a pretty great update mechanism. You can read about it here:
[https://coreos.com/using-coreos/updates/](https://coreos.com/using-coreos/updates/)

In short, CoreOS maintains two OS partitions, A and B. When an update is
available, it is automatically downloaded to the B partition, and upon reboot
the B partition becomes active, effectively rotating the partitions.
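On a running CoreOS host you can watch this mechanism from the shell with update_engine_client (the commands exist on CoreOS, though the exact output format may vary by release):

```shell
# Inspect the updater's state and trigger a check by hand.
update_engine_client -status            # current version and partition state
update_engine_client -check_for_update  # poll the update server now
```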

------
ebarock
Marketing, marketing and marketing...

Digital Ocean is doing lots of advertising, but their servers don't hold up
under traffic.

I had my website hosted with them and was literally unable to connect to it
via SSH due to the low quality of their network link.

I was disappointed with DreamHost, moved to Digital Ocean, and now I am
testing Linode.

~~~
threeseed
Pretty sure it's something related to the quality of your own link. I've had
no issues in the past SSHing into DigitalOcean's US DCs, from Australia no
less.

Also, whilst you are testing Linode, have a look into their behaviour the last
few times they were hacked. They deliberately withheld information from their
customers. Pretty despicable company, if you ask me.

------
avinassh
Their official blog announcement:
[https://www.digitalocean.com/company/blog/coreos-now-
availab...](https://www.digitalocean.com/company/blog/coreos-now-available-on-
digitalocean/)

------
fishnchips
Hurray! I've been really waiting for that for the last few months. I remember
there being a huge thread about it in the DO community.

------
notastartup
What benefits do you get with CoreOS support? I don't understand from just
reading the article; maybe a real-world example would make more sense.

