Show HN: CoreOS, a Linux distro for containers (coreos.com)
417 points by polvi on July 30, 2013 | 97 comments



The combination of service discovery and containerization is incredibly powerful for distributed applications. I love the idea that I can simply start a Docker container, and it can then discover its configuration and self-configure, rather than having to use Chef/Puppet/whatever.

To my mind, this is the missing answer to "how do I actually use Docker?"

I'm particularly excited by the idea of having a cluster of machines self-configure; normally this is incredibly painful, relying on multicast (not normally available on the cloud) or some ugly hacks (like using S3).


For example, if I'm deploying a Rails app, how would CoreOS make it possible to install web servers, databases, and rubygems? All through Dockerfile descriptions? An app-level getting started guide would be great.


This is where we are going next; CoreOS is super young today though.

We want to build an environment where you run and manage all of those servers and databases inside of containers and via APIs. Or even better, skip the APIs, just add nodes, and have something like etcd help you configure them dynamically.

But, we needed to get all of the foundations solid and that is where we are today.


So, you haven't encountered dependency and version hell yet, and just assume that maintaining all the libs and tools a Rails app implicitly requires is an easy task?) Good luck, then.)


No, that part is already solved. That's what containers do.

I believe philips is saying that CoreOS is still experimenting with and reifying exactly what the layer above containerized applications should be composed of and look like.


So, basically, it looks as if I'm moving my /usr/local directory between hosts to be run under chroot, using aufs instead of tar.gz, LXC instead of chroot, and some wrapper around ssh and scp, but with all that hype and memes? It looks like an implementation of FreeBSD's jails with an impoverished ports system and a fancy command-line tool. OK.

Update: what is really interesting is the amount of hype the Docker project created, having no fundamentally new ideas or technological advances, just well-written (we must admit) web pages full of buzzwords.

As a person who ran FreeBSD jail-based virtual hosting in production, I would say that disk and network I/O will be a bottleneck, because neither interface scales well enough.)


I've been running FreeBSD ZFS-backed jails for quite some time too (I still think FreeBSD is superior to Linux in many ways). What Docker does that ezjail + ZFS doesn't do is make it transportable. Sure, you could write a script to bundle up your jail, but Docker is making it possible to distribute an application orthogonally. Docker is also replete with tools for sharing containers, providing a container server (privately!), and much more...

On performance: containerization > virtualization. Containerization is also not just about performance (people are running it in virtualized environments); it's also about contained configurations.

Your tone is unnecessarily belligerent (I could be reading into what you wrote though).

Also - I know people who know people, and they are pushing for the Docker team to support containerizing FreeBSD jails too. Since the primary workhorse for Docker right now is LXC, it should be possible to generalize it enough to include support for FreeBSD jails.

Docker does more than just "jail"; it does for jails what package managers did for programs.


I do not understand "orthogonality", sorry. I think in terms of ldd's output, and it tells me that nothing fundamentally new could be created. It's either packages in /usr or stuff in /usr/local. You might mount /usr as unionfs - the same for many jails - but you cannot undo dynamic linking; at least, a statically linked Ruby is nonsense (though static linking is sometimes a good idea). I also cannot understand "containerization" - why is a snapshot created by tar not a container? The assumption that you could reduce /lib is naive. Yes, /usr/lib could be reduced, but only by removing GNOME stuff and some fancies.

I have tried to make a minimalist but compatible system. It doesn't happen. Things like Kerberos, SASL, LDAP, and PAM mess everything up. I ended up with almost exactly the "minimum install". Paraphrasing a bit: "dependencies! dependencies! dependencies!" - a minimal install of Fedora or Ubuntu Server is almost optimal.


If ZFS snapshots were deployable to other machines in a network, that would be really superior. Is that possible? By deploy I mean pushing snapshots into another machine's zpool, migrating them, and finally applying them, so that in the end you have a copy of the other server.
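(Something like zfs send / zfs receive seems close to what I mean - a rough, untested sketch with made-up pool and dataset names:)

    # snapshot a dataset and push it into another machine's zpool
    zfs snapshot tank/jails/web@2013-07-30
    zfs send tank/jails/web@2013-07-30 | ssh otherhost zfs receive tank/jails/web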

Well, there's still no usable (web) GUI for managing ZFS (tell me what you want, but that IS what is needed to make ZFS more popular).

Would a minimal install of [your_server] into KVM give more raw performance than LXC+Docker?

One downside of Docker & LXC is that applications CAN break out of the "jail" and affect the host machine. The Docker guys told me about that and said they will work on it in the future.

After all that fizzbuzz, I'm not sure anymore; maybe Chef/Puppet/Ansible or Salt would be a much better solution than dealing with bits "manually", because that would be the model-driven approach.


"Would a minimal install of [your_server] into KVM give more raw performance than LXC+Docker?" The question is: what for? Virtualization and production simply do not match.)

A notion closer to so-called real life might be this: some guys managed to get funding for a mix of buzzwords they did not fully understand themselves, especially OS design. But after putting up a Minimum Viable Buzzwords website they got a lot of hype, so, magically, it became a "great idea", a "big thing", while being absolutely nothing new and solving no real problems.

There is no problem with installing packages or tarballs on the same system, over the network, on multiple hosts, whatever, as long as it is the same version of the system. There is nothing to fix. It is optimal already, and package managers and ports systems do the job well enough, taking care of dependencies, security updates, restarts, etc.

All the buzz and hype comes from people who do not know and do not want to know how the underlying system works, what the basic ideas are, etc. For such guys the promise of a simple interface (they can see it is simple on the site) which requires no thinking or understanding is, of course, a piece of cake.

This is exactly how we got all the piles upon piles of Java crap. There is no need to think or understand, and deployment is easy - just dump all the shit in the same dir. Now we have all the mess and some smartasses talking about "JVM optimizations" while forgetting to mention that a slight change in workload, let alone the code, will invalidate all their prior assumptions. OK, fuck it.


SSD and 10Gb Ethernet are big changes...


Is it not an easy task? Install rvm and run "bundle" and you're pretty much done. Regardless, that's not necessarily part of the scope of this project as far as I can see. That's handled within the container, which is managed by Docker and its configuration system.
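To make that concrete, a container for a Rails app could be described with something like the following Dockerfile (a rough sketch; the base image, package names, and paths are assumptions, not anything CoreOS prescribes):

    # sketch of a Rails app image; package names and paths are assumptions
    FROM ubuntu:12.04
    RUN apt-get update && apt-get install -y ruby1.9.1 ruby1.9.1-dev build-essential libsqlite3-dev
    RUN gem1.9.1 install bundler
    ADD . /app
    RUN cd /app && bundle install --deployment
    EXPOSE 3000
    CMD ["sh", "-c", "cd /app && bundle exec rails server -p 3000"]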


I just threw together a really simple proof of concept which uses a shell script to spin up a Docker instance running NGinx, and then registers the port that instance is listening on with Etcd. Because it's so simple, here it is:

`INSTANCE=$(docker run -d jonwood/nginx-node) && curl http://127.0.0.1:4001/v1/keys/nginx-nodes/$INSTANCE -d value="$(docker port $INSTANCE 80)"`

You'd then use Etcd's tree listing interface to grab the JSON fragments for each of those NGinx nodes, and configure Varnish or HAProxy or something to hit those backends.
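(Something like this, if I'm reading the v1 API right - the listing is just a GET on the directory used above:)

    # list the registered nginx nodes as JSON fragments
    curl http://127.0.0.1:4001/v1/keys/nginx-nodes/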

Obviously this isn't anywhere near production ready (for example if one of your NGinx instances goes down there's nothing to remove it from the pool), but so far I'm impressed with what CoreOS is doing.


"for example if one of your NGinx instances goes down there's nothing to remove it from the pool" For now, you can set up a node with TTL and update it. If the server goes down, the node will disappear. We are going to provide a client that will keep long connection with the server.


> (for example if one of your NGinx instances goes down there's nothing to remove it from the pool)

My plan to solve this was to use Doozer with a small daemon on each agent server which would keep some kind of registry of the services running on the machine. I think that could be relatively lightweight, and with the ability to broadcast via watches there could be some kind of action attached to a node being removed or added.


From what I understand, it doesn't. You're expected to already have a container with a web server in it, and a container that already has a database in it. The web server container then uses the CoreOS service discovery feature to find out what IP address the database container listens on, and configures itself.

In other words, it does not replace Chef/Puppet. Maybe only for the configuration part, but not for the installation part. You may still want to use Chef/Puppet for creating the container in the first place.


To my mind, the big idea here is that containers will self-configure. This is configuration _inside_ the container, rather than configuration from _outside_ the container (like Chef/Puppet etc).

This means that your Docker container doesn't need to run a Chef/Puppet/SSH configuration agent. It will need some sort of "discover my config and configure myself" process running, which would perform much the same role though.

In other words, the container is "smart" and self-assembles. If you need to add more webserver capacity (for scaling), you just launch more webserver containers; they will register themselves with the load balancer and attach to the database.

Of course, building these smart containers is non-trivial, so I'm eager to see some real-world examples.
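To sketch what I mean (the key paths, the etcd address, and the app command are all assumptions), a container's entry point could be a small script like:

    #!/bin/sh
    # look up the database address in etcd, then start the app (rough sketch)
    DB_HOST=$(curl -s http://172.17.42.1:4001/v1/keys/services/db | sed 's/.*"value":"\([^"]*\)".*/\1/')
    export DATABASE_URL="postgres://app@${DB_HOST}/app_production"
    exec bundle exec rails server -p 3000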


What you're saying is exactly what I said in the first paragraph. Containers discover the config and configure themselves. But where do those containers come from in the first place, and how do you build them? Where does that container, containing your in-house app, come from? That is where Chef/Puppet may come in.


I was thinking that there would be a generic Rails container (like a Heroku buildpack), that reads the location of your Rails code from the configuration, downloads it, and runs it.

So, just like Heroku buildpacks, you can create your own, and you can even create one with your app baked in; but most people would end up using an off-the-shelf container.

But this is just my interpretation! Your approach may be better, and it doesn't look like CoreOS will care which way you want to do things!


As mentioned, you would need "some sort of process" to config the host -- this is Puppet/Chef/etc. Replacing those is not trivial.

Also, not quite sure I understand config "inside vs outside" being the big idea. You can easily include Puppet manifests and have Puppet run in standalone mode from within the container with no outside access to config. That's not new.


Exactly. A container self-configuring by pulling its config from an external service is pretty much the same as chef-client pulling down a configuration from the Chef server.

What I want from something like CoreOS is a stripped-down container host that will host stripped-down app containers. I want to package my app in the most-stripped-down fashion possible and deploy it on this stripped-down container host.

The idea of using Chef/Puppet to deploy an app onto a full-blown install of Ubuntu/CentOS/etc. seems like overkill. There's a lot of superfluous crap on that full install that consumes storage, memory, and CPU resources. Having a full OS also creates more attack vectors. If you were an attacker, would you rather root some Rails app on a full install of Ubuntu, complete with shells, compilers, etc., or a box with only the bare essentials required to run that Rails app?

I suppose that an argument for having a full-OS install is to make it easy for things like Chef/Puppet to update the server in situ. Stripped-down containers could make that unnecessary. Imagine an app container so small that it's easier to just blow away the container and create a new one with the freshest software.


>What I want from something like CoreOS is a stripped-down container host that will host stripped-down app containers. I want to package my app in the most-stripped-down fashion possible and deploy it on this stripped-down container host.

If that's all you want then use Ubuntu Server.


That's not stripped down nearly enough for my tastes. I'm talking about the absolute bare minimum to make an app fully function.


I'm working on a scripting interface to Linux designed so you can configure everything internally to your application (i.e. it can run as the init process) with no significant dependencies (you can statically link it)[1]. It's not finished yet, but you can configure network interfaces, routing, etc. It needs some more examples, build scripts, etc...

[1] https://github.com/justincormack/ljsyscall



The minimum install has ~4 processes running. You can't get much more bare than that.


Puppet could indeed be that process, and certainly replacing it isn't trivial. But I'm excited by the idea that we could replace it, by being able to rely on the configuration being directly available (provided by the OS). We'll have to see what gets created!


What's the advantage of it being built into the OS versus using chef or puppet? There's going to be some kind of process running to handle configuration no matter what. Docker also has Dockerfiles to handle configuration of containers. After seeing this article I looked into config management in Docker and containers and I'm having a hard time seeing why something like puppet/chef isn't the right solution.


You can design your containers/VMs to self-configure with Puppet/Chef. I have an EC2 instance that you configure by telling it what role it has when you start it; it then has headless Puppet scripts which configure it on boot.
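Roughly like this, for what it's worth (the role lookup via user-data and the module path are just how mine happens to be set up):

    #!/bin/sh
    # first-boot hook: read the role from EC2 user-data, then apply the matching manifest headlessly
    ROLE=$(curl -s http://169.254.169.254/latest/user-data)
    puppet apply --modulepath=/etc/puppet/modules -e "include role::${ROLE}"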


Indeed. It will happen. chef and puppet are replicating manual config. You need your app to configure its whole environment.


If you're under the impression Puppet and Chef simply replicate a manual config, you're doing it wrong.

Look at concepts like Puppet's exported resources. The tools to accomplish service discovery have been around (and many of us have been using them) for years. I'm just excited to see the concept finally getting a bit more mindshare via Docker and now CoreOS.


For those utterly confused by this story, CoreOS is for running Containers.

Containers can be thought of as a way of packaging an entire runtime environment which is more lightweight and more universally deployable than creating a virtual machine image.

This one slide explains it well: http://www.docker.io/static/img/about/docker_vm.jpg


How does the "appropriate" sharing of bins / libs take place?


The containers can share a file system layer which is mounted read only. Without write access the containers can't interfere with each other and there only needs to be one copy of these shared files on the host.

With something like AUFS (which is what Docker uses), an R/W layer can be placed on top of this, allowing the guest container to modify its own files without impacting the underlying shared read-only layer.
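The underlying mount ends up looking roughly like this (paths made up; Docker manages this itself rather than you mounting by hand):

    # stack a per-container writable branch on top of a shared read-only image
    mount -t aufs -o br=/var/lib/containers/web1/rw=rw:/var/lib/images/base=ro none /var/lib/containers/web1/rootfs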


Nothing is shared at all between containers besides the kernel. Inside a container, apps share libraries and binaries.


Looking at the slide, the Microsoft equivalent seems to be app pools running in IIS.


Brandon from CoreOS here. Check out the ec2 docs here: http://coreos.com/docs/ec2/


Geez, the rabbit hole gets even deeper. This is all great. Docker has been moving at "ludicrous speed" from the beginning and the ecosystem developing around it has been doing the same.

I'm itching to play with etcd also and hopefully it can gain more momentum than Zookeeper or Doozer did.


Is it kind of like SmartOS but with Linux instead of Illumos and without DTrace and ZFS?


I think so. However I don't know if this is meant to be used on bare metal.


We would like to run on bare metal eventually.

Targeting virtualized hardware keeps our testing matrix small right now and makes it easy for people to try it out.


Being unfamiliar with how one creates a Linux distro, can you go into a little more detail as to the difference between targeting virtual hardware over physical hardware?


Sure! There are a few differences:

1) Installation: On physical hardware you need to provide a way for people to initially boot and then install the software on disk. Sophisticated people want something they can boot over the network and script. Everyone who is just trying it out wants a GUI, which means you have to build something that finds disks, helps the user partition them, and installs the software.

On virtual hardware there is usually no install step: you just get a VM image and run it. This lowers the barrier to people trying something out.

2) Hardware support complexity: Physical hardware has hundreds of different devices that come in thousands of different combinations. Some of it requires 3rd party drivers and custom configuration at boot time. Supporting everything ever made adds a lot of complexity.

I think it is a bit comparable to the complexity that ORMs have of running on top of 20 different databases and still providing a useable interface that doesn't break.

In the virtual hardware space there are only about four sets of kernel drivers you need in order to support all of them, and they are generally well tested.

3) Test cycles: It is easy to turn virtual machines on and off for testing, and the developer feedback loop is tight even for the tricky bits like our boot code (<60s). However, on physical gear it can take 5+ minutes to test out a single iteration because of copying over the new binaries, slow booting disks, etc.

4) Customer debug cycles: Without easy access to the same physical hardware a user has, I have to build a debug image, have them install it, and then have them give me the debug output. This cycle can take days.

Something like the Open Compute hardware can reduce a lot of these pain points. Also, in a given year a majority of servers have similar hardware so you can work on a lot of gear with minimal additions if you curate.


SmartOS does not offer container-based virtualization of linux containers. CoreOS is designed to run inside normal virtualization (KVM/Xen/etc).


SmartOS also offers container based virtualization of SmartOS (zones), which can run inside normal virtualization. It also, just like Linux, can additionally run other OS's inside KVM (native port).

You are correct though that SmartOS does not offer container based virtualization of Linux.

So to answer my own question, I guess.. "yes"


While ZFS and DTrace are cool, real Linux containers are useful to a wider range of people, based on the usage stats.

(Also: zfsonlinux.org works well)


Can someone please explain what a container is? Googling 'container' doesn't seem to give me useful or relevant results.


A Linux machine has the kernel, which deals with the hardware, and then a filesystem with everything required to run applications. A container is a Linux machine without the kernel.

As an example: with containers you can run a database that needs a CentOS environment on the same physical machine that is running an application server expecting a Debian-looking environment, all without the performance hit you would take from VMs.
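For example (the image names are just illustrative; any base images from the index would do):

    # two different userlands sharing one host kernel
    docker run -i -t ubuntu /bin/bash
    docker run -i -t centos /bin/bash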


So... it's a chroot jail?


In a way. However, you can run a full machine from init on down and fully isolate networking, process namespaces and resources like memory or CPU.


Like VPS in FreeBSD 10


I had the same question. I was eventually plopped here:

https://en.wikipedia.org/wiki/Operating_system-level_virtual...

"Operating system-level virtualization is a server virtualization method where the kernel of an operating system allows for multiple isolated user-space instances, instead of just one. Such instances (often called containers, VEs, VPSs or jails) may look and feel like a real server, from the point of view of its owner. On Unix systems, this technology can be thought of as an advanced implementation of the standard chroot mechanism. In addition to isolation mechanisms, the kernel often provides resource management features to limit the impact of one container's activities on the other containers."


A pretty good overview of the different types of virtualization (and non) is available on the SmartOS Wiki:

http://wiki.smartos.org/display/DOC/SmartOS+Virtualization


This presentation on Docker and containers explains it very well, and is well worth the read: http://www.docker.io/about/



Indeed; while the name 'container' is apt and helpful for the analogy to standardized shipping containers, it is somewhat generic and has many collisions.

Maybe it needs an extra distinguishing flourish? Kontainer? Softainer? DContainer?


"LXC" (for LinuX Container) will get you what you're looking for.


So the Linux crowd now reinvents SmartOS... Good, I guess.


Enjoy(ent) the Saab of computing, while it's still around.


I am using Docker at my startup. It's a very useful technology. I hope that CoreOS is as good as Docker.

I think that anyone interested in this should check out http://smartos.org/. CoreOS and SmartOS have many things in common. I don't know if the creators of Docker/CoreOS have tried SmartOS. I think they should. It's always good to check out and learn from similar projects.


This could use some big-picture documentation. Does this run inside or outside the containers?


It runs directly under KVM/Xen/Virtualbox/etc. Essentially it is a Linux Kernel, root filesystem and minimal set of services to be able to launch and manage containers.


I'm not sure why there's so much emphasis on "yo dawg" double virtualization. I'd be more interested in PXE booting this thing on bare metal.


Containers aren't virtualized, they're just isolated from each other. That's why they're cool.

Instead of working at the IaaS level for each app, you can have IaaS provide a cluster for containers to run in, and then put up a personal PaaS built out of containers - have more flexibility, spend less money, app folks don't have to worry about infra as much.

And in the case that you're at a scale that you can use real hardware (or a hybrid approach), it's easy to move without the containers having to worry about it.


That is something we would like to do. We started with virtualized hardware because it is easy for people to try out on their own stuff or other providers.

Also, we are just getting started and didn't want to try and support everything day 0. :)


I get the sense this is sort of the opposite of that. It's not virtualization. It's programs, isolated from each other, running on an operating system.


Outside. It boots the server which will run your docker containers.


And it uses Docker as the package format. Awesome :)


Does that mean Docker includes Linux package manager functionality?


Docker includes facilities for building, versioning, discovery, distribution and updates. So although it's very different from traditional package managers, yes, you can use it to distribute software.
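The basic workflow looks roughly like this (the repository name is made up):

    # build an image from a Dockerfile, publish it, and pull it elsewhere
    docker build -t myuser/myapp .
    docker push myuser/myapp
    docker pull myuser/myapp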


Oh look, an IRC channel, docs, etc. Oh, and you can't get access to it without registering for something, with full details, and maybe getting selected.

I would think it is not that hard to make something similar from any existing distro, with actual build steps, etc.

I.e. the "open source way", and not something with a probable financial interest.


Sorry for the confusion, you don't need to register for anything.

SDK (roll your own) docs here: http://coreos.com/docs/sdk/

Our EC2 images: http://coreos.com/docs/ec2/

46 repos worth of source here: https://github.com/coreos

The form is mainly to send t-shirts to the people that try it out for us. We do have a profit motive, because we want this project to be sustainable. Similar to Ubuntu and Red Hat.


Is there a way to receive news updates w/o registering or subscribing via e-mail? I don't see a Twitter feed, or even an RSS feed for the blog.


That issue was much discussed on IRC and someone even filed a bug: https://github.com/coreos/coreos-marketing/issues/1

We will have an RSS/Atom feed soon. :)


Thanks for the feedback, here is the feed: http://coreos.com/atom.xml


Is this project part of cloudkick(/rackspace) or a new venture? Website doesn't have much regarding the people/resources behind this.


I'd like a distro with ZFS/BTRFS, LXC and KVM, with a user friendly configuration layer on top. Not necessarily a GUI.

Really, a Linux version of SmartOS. I really like SmartOS but I like to get as much running in Zones as possible and there would be less friction doing this with a Linux kernel.


This isn't what you wanted to hear, but it gets you really close: Install ubuntu, lxc, virt-manager gui, and zfsonlinux.


Very interesting; can this complement Flynn? (https://flynn.io/) or is it in lieu of? Also, can it run on Open Stack?


It is a complement to Flynn. CoreOS with "git push to deploy" will be a great combination.

CoreOS runs fine on top of Xen, VMWare and KVM today. So, a CoreOS image should boot fine under OpenStack. Send me an email[1] if you want to try running it under a local OpenStack.

[1] brandon.philips@coreos.com


Fantastic! I will use this alongside Flynn, on RackSpace public cloud servers. Appreciate the email address, thanks.


In these embedded and HPC-like applications there is a significant advantage to be gained from having the right kernel flags (preemption, etc.).

I would like to see this distro build its kernel from source for most or every installation.


If you like the idea of CoreOS but don't like the idea of using Docker containers, check out bedrocklinux.org.


Yowza, I keep being impressed by the alacrity with which Docker based ecosystem components keep popping up.


I am super excited about this, as I am doing OpenStack deployment automation. With this I can do automated deployment all the way out to the app, on bare metal, at scale, extremely leanly.


Has anybody tried to get this running on Linode?

Sorry if that's a n00b question, I'm still fumbling my way around the (ever-growing) virtualization / devops landscape.


This looks awesome; any info on who's behind this?


Security-wise is containerization safer than standard operating systems? (besides being relatively new and unexploited)


Is docker required/prepackaged? I'd prefer to use vanilla lxc/dhcpcd.


From the documentation it looks like it is only prepackaged; you can use it or not. Also, you can build CoreOS without Docker.


Ah, yet another awesome thing built on top of Gentoo.


We're based on the ChromeOS SDKs... which use emerge to build the binaries required to assemble the distro. You can think of emerge/Gentoo as the toolchain used to build all the binaries. We also pull base system packages from upstream Portage, then compile them all together in our image.


Has this got anything to do with Tiny Core?


If you're core (a totally stripped-down OS), why do you provide a discovery service that I'm going to override?


Ready meals, yeah?)

I do remember times when there were essentially two choices - Debian or RH. There was also SuSE, but it had the madness of making everything look like NetWare, with standard, classic UNIX tools replaced by home-brew programs with dozens of parameters nobody knew. It died long ago, thank god.

The advantage of Debian was that it was the de-facto standard academia Linux. Which means more-or-less stable and well tested, while some designs were (and still are) lame. apt is such a lousy mess compared to RPM.)

Then the wave of migration from proprietary UNIXes to cheap Linux systems began, and RHEL flourished, being the OS of choice if you wished to run Oracle or Informix (the second was very impressive and still is). RHEL at that time was actively developed, well-tested, and even went through a painful transition to NPTL.

Then good people made CentOS from RHEL's sources, and nowadays it is still the default choice for a stable Linux that lags a little bit behind the popular distros (it is still on 2.6.x kernels).

Then came the rise of Ubuntu. Well, it is popular, which almost never means good.) Nevertheless, for the vast majority Linux = Ubuntu. Leaving aside the crazy habit of incorporating any new shiny crap invented by the freedesktop guys, such as various init, management and settings "services", it is quite stable and well-tested, indeed. Btw, compared to the glorious days of the 2.4 to 2.6 migration, or that NPTL stuff, there are almost no problems with core libraries and tools.

So, does anyone need a new distro? My answer is NO. It is quite easy to reduce CentOS or even Ubuntu (or Fedora, which is also infected by the systemd madness) to a minimal and stable set of packages. All you need to do is exclude all the GNOME-related stuff and its dependencies, keeping the image and font manipulation libraries and the X11 libs so you can still recompile popular packages.

The key idea here is to begin with sources that have already been tested many times, such as CentOS .srpms (which have gone through testing by two separate teams) or Ubuntu's packages, cutting off unnecessary dependencies. Then you will have a compatible and well-tested OS for containers, or whatever else salespeople call this banal para-virtualization.

Setting up your own yum repository is a matter of a few hours; Debian packaging is messier, but manageable. This is what a sysadmin's job is all about.
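(For the yum side it really is just createrepo plus a .repo file; the paths and names below are made up:)

    # generate repo metadata for your rebuilt RPMs and point yum at it
    createrepo /srv/repo/x86_64
    printf '[local]\nname=Local rebuilt packages\nbaseurl=file:///srv/repo/x86_64\nenabled=1\ngpgcheck=0\n' > /etc/yum.repos.d/local.repo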

Btw, vendors such as Amazon have already done this job, so if you hate system administration (which is a sign that shopping might be a better future ,) just re-use those images - it is much better than some new "core OS".

The so-called "minimal install" of Ubuntu is also fine, and all you need to do is recompile the important packages, such as MySQL, the way you like them and put them in your local repo.


I think it is worth mentioning that not even Ubuntu is its own distro in the sense you seem to be talking about it; it is a derivative of Debian, and the majority of the packages are still maintained upstream.

There is also a lot of room for new types of operating systems. CoreOS seems to be an operating system design that is not present in any other distro that I am aware of, and is definitely not a stripped-down Ubuntu.

Another interesting OS design I am aware of is NixOS[1], which features a purely functional package management system.

For the typical use cases, the answer is, as always, stick with the tried and tested. However, there is still plenty of unexplored space that may be superior for specific domains, and might even become superior to (or, more likely, influence) the common-case solution.

[1] http://nixos.org/


If vendors do not support your OS and do not maintain packages for it (which means a compile-test-release cycle), it is the same as if it does not exist.

The curse of FreeBSD is that so-called Linux vendors don't take the trouble to support it. There is, for example, no Android SDK support for FreeBSD, even though, being mostly Java-based, it would not be that difficult. The emulator and the debugging driver seem like complicated things, but they are not that complicated.

So, when Google or even Percona does the packaging for you, then one could say that an OS makes some sense.



