
Show HN: CoreOS, a Linux distro for containers - polvi
http://coreos.com/
======
justinsb
The combination of service discovery and containerization is incredibly
powerful for distributed applications. I love the idea that I can simply start
a Docker container, and it can then discover its configuration and self-
configure, rather than having to use Chef/Puppet/whatever.

To my mind, this is the missing answer to "how do I actually use Docker?"

I'm particularly excited by the idea of having a cluster of machines self-
configure; normally this is incredibly painful, relying on multicast (not
normally available on the cloud) or some ugly hacks (like using S3).

~~~
alperakgun
for example.. if I'm deploying a Rails app, how would CoreOS make it possible
to install web servers, databases and rubygems? All through Dockerfile
descriptions? An app-level getting started guide would be great..

~~~
jon-wood
I just threw together a really simple proof of concept which uses a shell
script to spin up a Docker instance running Nginx, and then register the port
that instance is listening on with etcd. Because it's so simple, here it
is:

`INSTANCE=$(docker run -d jonwood/nginx-node) && curl http://127.0.0.1:4001/v1/keys/nginx-nodes/$INSTANCE -d value=$(docker port $INSTANCE 80)`

You'd then use etcd's tree-listing interface to grab the JSON fragments for
each of those Nginx nodes, and configure Varnish or HAProxy or something to
hit those backends.
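
To sketch that last step (a hypothetical helper, not anything CoreOS or etcd ships): the etcd v1 response bodies carry `"value"` fields, which you could scrape into HAProxy-style `server` lines with plain grep/sed.

```shell
# Hypothetical sketch: turn etcd v1 JSON fragments (read from stdin)
# into HAProxy-style "server" lines, one per stored value.
# Assumes the /v1/keys/nginx-nodes/... layout from the snippet above.
extract_backends() {
  grep -o '"value":"[^"]*"' | sed 's/^"value":"\(.*\)"$/server \1/'
}
```

You'd pipe `curl http://127.0.0.1:4001/v1/keys/nginx-nodes` through `extract_backends` and template the result into your haproxy.cfg.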

Obviously this isn't anywhere near production ready (for example if one of
your NGinx instances goes down there's nothing to remove it from the pool),
but so far I'm impressed with what CoreOS is doing.

~~~
ideal0227
"for example if one of your NGinx instances goes down there's nothing to
remove it from the pool" For now, you can set a node with a TTL and keep
updating it; if the server goes down, the node will disappear. We are going to
provide a client that keeps a long-lived connection with the server.

------
srgseg
For those utterly confused by this story: CoreOS is for running containers.

Containers can be thought of as a way of packaging an entire runtime
environment that is more lightweight and more universally deployable than
creating a virtual machine image.

This one slide explains it well:
[http://www.docker.io/static/img/about/docker_vm.jpg](http://www.docker.io/static/img/about/docker_vm.jpg)

~~~
dman
How does the "appropriate" sharing of bins / libs take place?

~~~
jtgeibel
The containers can share a filesystem layer which is mounted read-only.
Without write access the containers can't interfere with each other, and there
only needs to be one copy of these shared files on the host.

With something like AUFS (which is what Docker uses), a read-write layer can
be placed on top of this, allowing the guest container to modify its own files
without impacting the underlying shared read-only layer.
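
The "writable layer wins, shared layer underneath" rule can be illustrated with a toy sketch using plain directories. This simulates only the lookup semantics, not the actual AUFS mechanics:

```shell
# Toy illustration of union-mount lookup semantics (not real AUFS):
# a shared read-only lower layer plus a per-container upper layer.
lower=$(mktemp -d)    # shared base image contents
upper=$(mktemp -d)    # container's private read-write layer

echo "shared libc" > "$lower/libc.so"
echo "shared conf" > "$lower/app.conf"
echo "patched conf" > "$upper/app.conf"   # container modified its copy

# A union lookup checks the writable layer first, then falls through.
union_read() {
  if [ -f "$upper/$1" ]; then cat "$upper/$1"; else cat "$lower/$1"; fi
}
```

Here `union_read app.conf` returns the container's patched copy while `union_read libc.so` still comes from the shared layer, which is why only modified files cost extra disk per container.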

------
philips
Brandon from CoreOS here. Check out the ec2 docs here:
[http://coreos.com/docs/ec2/](http://coreos.com/docs/ec2/)

------
gexla
Geez, the rabbit hole gets even deeper. This is all great. Docker has been
moving at "ludicrous speed" from the beginning and the ecosystem developing
around it has been doing the same.

I'm itching to play with etcd too, and hopefully it can gain more momentum
than ZooKeeper or Doozer did.

------
stock_toaster
Is it kind of like SmartOS but with Linux instead of Illumos and without
DTrace and ZFS?

~~~
pyotrgalois
I think so. However I don't know if this is meant to be used on bare metal.

~~~
philips
We would like to run on bare metal eventually.

Targeting virtualized hardware keeps our testing matrix small right now and
makes it easy for people to try it out.

~~~
themckman
Being unfamiliar with how one creates a Linux distro, can you go into a little
more detail as to the difference between targeting virtual hardware over
physical hardware?

~~~
philips
Sure! There are a few differences:

1) Installation: On physical hardware you need to provide a way for people to
initially boot and then install the software on disk. Sophisticated people
want something they can boot over the network and script. Everyone who is
just trying it out wants a GUI, which means you have to build something that
finds disks, helps the user partition them, and installs the software.

On virtual hardware there is usually no install step: you just get a VM image
and run it. This lowers the barrier to people trying something out.

2) Hardware support complexity: Physical hardware has hundreds of different
devices that come in thousands of different combinations. Some of it requires
3rd party drivers and custom configuration at boot time. Supporting everything
ever made adds a lot of complexity.

I think it is a bit comparable to the complexity ORMs have in running on top
of 20 different databases while still providing a usable interface that
doesn't break.

In the virtual hardware space there are only about four sets of kernel drivers
needed to support all of them, and they are generally well tested.

3) Test cycles: It is easy to turn virtual machines on and off for testing,
and the developer feedback loop is tight even for the tricky bits like our
boot code (>60s). On physical gear, however, it can take 5+ minutes to test
out a single iteration because of copying over the new binaries, slow-booting
disks, etc.

4) Customer debug cycles: Without easy access to the same physical hardware a
user has I have to build a debug image, have them install it, then give me the
debug output. This cycle can take days.

Something like the Open Compute hardware can reduce a lot of these pain
points. Also, in a given year the majority of servers have similar hardware,
so with some curation you can cover a lot of gear with minimal additions.

------
brandonhsiao
Can someone please explain what a container is? Googling 'container' doesn't
seem to give me useful or relevant results.

~~~
philips
A Linux machine has the Kernel which deals with the hardware and then a
filesystem with everything required to run applications. A container is a
Linux machine without the Kernel.

As an example: with containers you can run a database that needs a CentOS
environment on the same physical machine as an application server that expects
a Debian-looking environment, all without the performance hit you would take
from VMs.

~~~
nzgrover
So... it's a chroot jail?

~~~
philips
In a way. However, you can run a full machine from init on down and fully
isolate networking, process namespaces and resources like memory or CPU.

~~~
gonzo
Like VPS in FreeBSD 10

------
4ad
So the Linux crowd now reinvents SmartOS... Good, I guess.

~~~
routelastresort
Enjoy(ent) the Saab of computing, while it's still around.

------
pyotrgalois
I am using docker on my startup. It's a very useful technology. I hope that
coreos is as good as docker.

I think that anyone interested on this should check
[http://smartos.org/](http://smartos.org/). Coreos and Smartos have many
things in common. I don't know if the creators of docker/coreos have tried
smartos. I think they should. It's always good to check and learn from similar
projects.

------
wmf
This could use some big-picture documentation. Does this run inside or outside
the containers?

~~~
philips
It runs directly under KVM/Xen/Virtualbox/etc. Essentially it is a Linux
Kernel, root filesystem and minimal set of services to be able to launch and
manage containers.

~~~
wmf
I'm not sure why there's so much emphasis on "yo dawg" double virtualization.
I'd be more interested in PXE booting this thing on bare metal.

~~~
lotyrin
Containers aren't virtualized, they're just isolated from each other. That's
why they're cool.

Instead of working at the IaaS level for each app, you can have IaaS provide a
cluster for containers to run in, and then put up a personal PaaS built out of
containers - have more flexibility, spend less money, app folks don't have to
worry about infra as much.

And if you're at a scale where you can use real hardware (or a hybrid
approach), it's easy to move without the containers having to worry about it.

------
shykes
And it uses Docker as the package format. Awesome :)

~~~
alperakgun
does it mean docker includes linux package manager function?

~~~
shykes
Docker includes facilities for building, versioning, discovery, distribution
and updates. So although it's very different from traditional package
managers, yes, you can use it to distribute software.
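
As a sketch of what "Docker as the package format" looks like in practice (the image and file names below are hypothetical, not from the thread), a Dockerfile plays the role of the package spec:

```dockerfile
# Hypothetical package recipe: base image, dependencies, payload.
FROM ubuntu:12.04
RUN apt-get update && apt-get install -y ruby
ADD myapp /opt/myapp
EXPOSE 8080
CMD ["/opt/myapp/bin/server"]
```

`docker build` then produces a versioned image, and `docker push`/`docker pull` handle distribution, which is the sense in which it overlaps with a traditional package manager.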

------
zobzu
Oh look, IRC channel, docs, etc... oh, and you can't get access to it without
registering for something, with full details, and maybe getting elected.

I would think it's not that hard to make something similar from any existing
distro, with actual build steps, etc.

I.e. the "open source way", and not something with a probable financial
interest.

~~~
polvi
Sorry for the confusion, you don't need to register for anything.

SDK (roll your own) docs here:
[http://coreos.com/docs/sdk/](http://coreos.com/docs/sdk/) Our EC2 images:
[http://coreos.com/docs/ec2/](http://coreos.com/docs/ec2/) 46 repos worth of
source here: [https://github.com/coreos](https://github.com/coreos)

The form is mainly so we can send t-shirts to the people who try it out for
us. We do have a profit motive, because we want this project to be
sustainable, similar to Ubuntu and Red Hat.

~~~
elehack
Is there a way to receive news updates w/o registering or subscribing via
e-mail? I don't see a Twitter feed, or even an RSS feed for the blog.

~~~
marineam
That issue was much discussed on IRC and someone even filed a bug:
[https://github.com/coreos/coreos-marketing/issues/1](https://github.com/coreos/coreos-marketing/issues/1)

We will have an RSS/Atom feed soon. :)

------
knotty66
I'd like a distro with ZFS/BTRFS, LXC and KVM, with a user friendly
configuration layer on top. Not necessarily a GUI.

Really, a Linux version of SmartOS. I really like SmartOS but I like to get as
much running in Zones as possible and there would be less friction doing this
with a Linux kernel.

~~~
Andys
This isn't what you wanted to hear, but it gets you really close: Install
ubuntu, lxc, virt-manager gui, and zfsonlinux.

------
gales
Very interesting; does this complement Flynn
([https://flynn.io/](https://flynn.io/)), or is it a replacement for it? Also,
can it run on OpenStack?

~~~
philips
It is a complement to Flynn. CoreOS with "git push to deploy" will be a great
combination.

CoreOS runs fine on top of Xen, VMWare and KVM today. So, a CoreOS image
should boot fine under OpenStack. Send me an email[1] if you want to try
running it under a local OpenStack.

[1] brandon.philips@coreos.com

~~~
gales
Fantastic! I will use this alongside Flynn, on RackSpace public cloud servers.
Appreciate the email address, thanks.

------
frozenport
In these embedded and HPC-like applications there is a significant advantage
gained by having the right kernel flags (preemption, etc.).

I would like to see this distro build its kernel from source for most or every
installation.

------
bsilvereagle
If you like the idea of CoreOS but don't like the idea of using Docker
containers, check out bedrocklinux.org.

------
DannoHung
Yowza, I keep being impressed by the alacrity with which Docker based
ecosystem components keep popping up.

------
samstave
I am super excited about this, as I am doing OpenStack deployment automation.
With this I can deploy automatically, all the way out to the app, on bare
metal, at scale, extremely leanly.

------
idan
Has anybody tried to get this running on Linode?

Sorry if that's a n00b question, I'm still fumbling my way around the (ever-
growing) virtualization / devops landscape.

------
alexchamberlain
This looks awesome; any info on the _who_ behind this?

------
dmix
Security-wise is containerization safer than standard operating systems?
(besides being relatively new and unexploited)

------
visualphoenix
Is docker required/prepackaged? I'd prefer to use vanilla lxc/dhcpcd.

~~~
nvartolomei
From the documentation it looks like it is only prepackaged; you can use it or
not. You can also build CoreOS without Docker.

------
dochtman
Ah, yet another awesome thing built on top of Gentoo.

~~~
polvi
We're based on the ChromeOS SDKs, which use emerge to build the binaries
required to assemble the distro. You can think of emerge/Gentoo as the
toolchain used to build all the binaries. We also pull base system packages
from upstream Portage, then compile them all together in our image.

------
inthewind
Has this got anything to do with Tiny Core?

------
grogenaut
If you're core (a totally stripped-down OS), why do you provide a discovery
service I'm going to override?

------
dschiptsov
Ready meals, yeah?)

I do remember the times when there were essentially two choices: Debian or RH.
There was also SuSE, with its madness of making everything look like NetWare,
and the standard, classic UNIX tools replaced by home-brew programs with
dozens of parameters nobody knew. It died long ago, thank god.

The advantage of Debian was that it was the de-facto standard academia Linux,
which means more-or-less stable and well tested, even if some designs were
(and still are) lame. apt is such a lousy mess compared to RPM.)

Then the wave of migration from proprietary UNIXes to cheap Linux systems
began, and RHEL flourished, being the OS of choice if you wished to run Oracle
or Informix (the second was very impressive and still is). RHEL at that time
was actively developed, well-tested, and even went through the painful
transition to NPTL.

Then good people made CentOS from RHEL's sources, and nowadays it is still the
default choice for a stable Linux that lags a little behind the popular
distros (it is still on 2.6.x kernels).

Then came the rise of Ubuntu. Well, it is popular, which almost never means
good.) Nevertheless, for the vast majority, Linux = Ubuntu. Leaving aside the
crazy habit of incorporating any new shiny crap invented by the freedesktop
guys, such as various init, management and settings "services", it is quite
stable and well-tested, indeed. Btw, compared to the glorious days of the 2.4
to 2.6 migration, or that NPTL stuff, there are almost no problems with core
libraries and tools.

So, does anyone need a new distro? My answer is NO. It is quite easy to reduce
CentOS or even Ubuntu (or Fedora, which is also infected by the systemd
madness) to a minimal and stable set of packages. All you need to do is
exclude all the Gnome-related stuff with its dependencies, keeping the image
and font manipulation libraries, and the X11 libs so you can recompile popular
packages.

The key idea here is to begin with sources that have already been tested many
times, such as CentOS .srpms (which went through testing by two separate
teams) or Ubuntu's packages, cutting off unnecessary dependencies. Then you
will have a compatible and well-tested OS for containers, or whatever else
sales people call this banal para-virtualization.

Setting up your own yum repository is a matter of a few hours; Debian
packaging is messier, but manageable. This is what a sysadmin's job is all
about.
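
For concreteness, the client side of such a repository is just a .repo fragment dropped into /etc/yum.repos.d/ after `createrepo` has indexed the RPMs (the repo name and URL below are placeholders, not anything from the thread):

```ini
# Hypothetical local-repo definition; baseurl is a placeholder.
[local]
name=Local curated packages
baseurl=http://repo.example.internal/x86_64
enabled=1
gpgcheck=0
```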

Btw, vendors such as Amazon have already done this job, so if you hate system
administration (which is a sign that shopping might be a better future,) just
re-use their images - it is much better than some new "core OS".

The so-called "minimal install" of Ubuntu is also fine; all you need to do is
re-compile the important packages, such as MySQL, the way you like, and place
them in your local repo.

~~~
gizmo686
I think it is worth mentioning that not even Ubuntu is its own distro in the
sense you seem to be talking about; it is a derivative of Debian, and the
majority of its packages are still maintained upstream.

There is also a lot of room for new types of operating systems. CoreOS seems
to be an operating system design that is not present in any other distro that
I am aware of, and definitely not a stripped-down Ubuntu.

Another interesting OS design I am aware of is NixOS[1], which features a
purely functional package management system.

For the typical use cases the answer is, as always, stick with the tried and
tested. However, there is still plenty of unexplored space that may be
superior for specific domains, and might even supersede (or, more likely,
influence) the common-case solution.

[1] [http://nixos.org/](http://nixos.org/)

~~~
dschiptsov
If vendors does not support your os and do not maintain packages for your os
(which means compile-test-release cycle) it is the same as if it does not
exist.

The curse of FreeBSD is that the so-called Linux vendors don't take the
trouble to support it. There is, for example, no support for the Android SDK
on FreeBSD, even though, it being mostly Java-based, that would not be that
difficult. The emulator and the driver for debugging seem like complicated
pieces, but they are not _that_ complicated.

So, when Google or even Percona does the packaging for you, then one could say
that an OS makes some sense.

