
Linux Hackers Rebuild Internet From Silicon Valley Garage - smanuel
http://www.wired.com/wiredenterprise/2013/08/coreos-the-new-linux/
======
peterwwillis
Applications are like cogs in a giant piece of machinery. You can set up the
machine to run a particular way, and even have parts of the machine
independent of other parts.

Eventually parts get old and break down, or a flaw is found. This part needs
to either be replaced or upgraded. With slotted services (CoreOS), you can
replace an individual part and only affect that part of the machine, as long
as it's not integral to the entire machine.

But the machine is complex. Sometimes you have to change how fast the output
shaft spins, or the gearing on a transmission. Or perhaps some other part of
the machine's operation has to change, and that change impacts this part
because they connect to each other through more gears and pulleys.
(Comparison: API/ABI changes, database changes, network or protocol changes.)

All slotted services do is provide momentary independence. They do not reduce
overall support, and they only ease maintenance for that particular service.
All other complex facets of server and service maintenance remain the same.
The features of CoreOS - service discovery, systemd, a minimal OS, integrated
deployment, etc. - can all be provided by traditional Linux distributions.
CoreOS doesn't do anything new or difficult.

On top of that, by using such a specialized system to run your apps, you lose
all the flexibility of having a full linux OS to troubleshoot and debug from.
You now have to rely on them building on all the components that already exist
in regular Linux world, like debuggers, tracers, sniffers, profilers, etc.
You'll have to slip all that into your application deploy to troubleshoot a
weird one-off bug. And forget about ever having a service contract that
requires a supported OS like RHEL or Ubuntu.

This is a product designed to make them money via service and support
contracts. In that sense, they may be successful. But as a sysadmin I know
there's nothing this provides that I can't get from existing open source
tools. Rebuilding the internet? More like repackaging.

~~~
vishvananda
This is an insightful post. Deploying production applications in containers
with an auto-updating kernel underneath still has to be proven in the real
world. I do want to quibble with one point, however:

"On top of that, by using such a specialized system to run your apps, you lose
all the flexibility of having a full linux OS to troubleshoot and debug from.
You now have to rely on them building on all the components that already exist
in regular Linux world, like debuggers, tracers, sniffers, profilers, etc.
You'll have to slip all that into your application deploy to troubleshoot a
weird one-off bug."

CoreOS is a full-fledged Linux, and since applications are running in
containers, there is no reason you couldn't use debugging tools on the host.

Most production services have strict controls anyway, so it isn't like it is
common practice to log in to a production database server and do apt-get/yum
install gdb and start banging away.
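To make that concrete, here is a rough sketch of host-side debugging (a sketch only, assuming Docker-style containers and root on the host; the container name `mydb` is made up):

```shell
# Find the host-side PID of the containerized process
# (the container name "mydb" is hypothetical).
PID=$(docker inspect --format '{{.State.Pid}}' mydb)

# Attach a debugger from the host; no gdb inside the container needed.
gdb -p "$PID"

# Or run host tools inside the container's namespaces
# without installing anything into the container.
nsenter --target "$PID" --mount --net --pid -- ps aux
```

The point is that the container boundary is only a set of namespaces; the host's toolbox still reaches across it.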

~~~
peterwwillis
The complete lack of any detail about how the system actually works may have
confused me. I assumed they packaged an application with its dependencies and
deployed it as one big piece, similar to (or using) LXC.

From what I understand about LXC, you have to create a chroot environment for
your service to run in. This means any tools you want available inside the
container have to be installed in that chroot environment. For various types
of debugging/troubleshooting this may be necessary, as the resources and
behavior of the environment may be (read: are guaranteed to be) different from
those of the host OS.
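For illustration, the LXC workflow I have in mind looks roughly like this (a sketch assuming the lxc userspace tools; the container name `myservice` is made up):

```shell
# Create a container from a distro template (builds a chroot-style rootfs).
lxc-create -n myservice -t debian

# Start it in the background and get a shell inside it.
lxc-start -n myservice -d
lxc-attach -n myservice

# Any debugging tool you want inside the container has to be
# installed into its rootfs, e.g. from within the container:
apt-get install gdb strace
```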

I don't know what kind of environments you work in, but "strict controls" go
out the window when the production site is randomly going down and you're
losing millions of dollars in revenue. When all hell breaks loose, you dig in
your heels and debug the app server while it is crashing, with a developer
sitting next to you and three fuming managers behind your chairs. In this
scenario I'd rather have a plain old fat-ass Linux distro than a clunky
"minimal" container manager.

------
btipling
CoreOS is awesome. The problems it is solving (discussed previously on HN; see
links others have posted) are ones that need solving. This is the kind of idea
that's really going to change deploying and managing distributed applications
and services for the better. This is really important for where I work,
because we want to be able to distribute our application to others, and I
think we'll be able to do this with CoreOS. It's something we are still
investigating seriously, because working with enterprise-level companies we
need an option that doesn't exist yet and that works with our infrastructure.
For many large companies, web-based SaaS doesn't work with their security
requirements.

And really, this time around there's a great batch of companies featuring
some of the Cloudkickers/Rackers who worked on CK (polvi, ifup) and Floobits
(ggreer, kansface), who I know are dedicated to meaningful technology. You
can't expect the MSM to get into the nitty-gritty, but I found this article
does a good job of conveying the benefits one can derive from CoreOS in a way
that's more accessible to non-engineers.

------
theboywho
I like what the CoreOS guys are doing and I don't mean to be mean, but this is
clearly a sponsored post. No technical details, just plain dream-marketing
bullshit.

~~~
yapcguy
Yesterday Wired published a similar puff piece for App.net.

Does anybody know how much it "costs" to have a sponsored article?

~~~
nijiko
Cheap, especially if you know people.

------
shykes
For those of you familiar with Docker ([http://docker.io](http://docker.io)):
CoreOS is a linux distribution designed to get your machine from zero to
Docker as quickly and reliably as possible. It is stripped down to the bare
minimum, since its primary job is to run Docker, which itself has almost no
dependencies. As a result, CoreOS boots very quickly and is hard to get into
an inconsistent state, because it has fewer moving parts.

I think it's a great example of what becomes possible with the Separation of
Concerns allowed by container-based deployment. With docker containers as the
standard "glue", the components in your system can be less tightly coupled,
and as a result they can be simpler and more reliable.

And because each component can be chosen independently of the others, it's
easier for new alternatives to be adopted, because you don't have to rip out
the whole system to try them out. For example, in a docker-based system, you
could easily try out CoreOS on a few machines alongside your existing Red Hat
or Debian setup, because they can all run Docker. You could try StriderCI or
Buildbot instead of Jenkins for continuous integration, because they can all
build docker containers. You could try Nginx instead of Apache for some of
your applications, because they can all be bundled into a docker container.
And so on.
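As a sketch of that last example (the file and image names here are hypothetical, and nothing is CoreOS-specific), the same bundled Nginx container runs unchanged on any Docker-capable host:

```shell
# Hypothetical Dockerfile: bundle Nginx and a site into one container image.
cat > Dockerfile <<'EOF'
FROM ubuntu:12.04
RUN apt-get update && apt-get install -y nginx
ADD ./site /var/www
CMD ["nginx", "-g", "daemon off;"]
EOF

# Build once, then run the identical image on CoreOS, Red Hat, Debian, ...
docker build -t mysite .
docker run -d -p 80:80 mysite
```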

TLDR: containers lead to better interop between tools, which leads to more
innovation and competition on the tools, which leads to better tools, which
leads to happier engineers! :)

------
hga
Classic MSM: refer to an offsite item, in this case here, without linking to
it (just the site):

 _Show HN: CoreOS, a Linux distro for containers_ :
[https://news.ycombinator.com/item?id=6128700](https://news.ycombinator.com/item?id=6128700)

Also this, which has some general discussion:

 _CoreOS Vagrant Images_ :
[https://news.ycombinator.com/item?id=6149638](https://news.ycombinator.com/item?id=6149638)

~~~
throwaway1979
I'm a bit confused about containers. I know the ideas are quite old (I keep
hearing about the old BSD jails and chroot). These days, I'm hearing about
LXC containers, Warden, Docker, ...

The way I'm seeing it is that various projects are making it easier to use
containers and adding more functionality. Is this correct, or am I missing
something?

~~~
nickstinemates
That's correct!

LXC offers primitives for containerization with a bit of sugar which projects
like Docker leverage to provide a much lower barrier to entry for most people.
Today, you lose some flexibility due to lack of options, but it appears that
should only be temporary.

Depending on context, Docker containers can be thought of as a fat binary for
your application. Meaning, as long as a Docker Host is running, your
application can run without any additional dependencies. It's the exact same
environment your application is running in no matter where you are. The cool
part about the Docker internals is that pushing an update to another Docker
Host is incremental. Very similar to a `git diff`.
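A rough sketch of that incremental behavior (the image name `myapp` is hypothetical, and a configured registry is assumed):

```shell
# Each Dockerfile instruction produces a layer; rebuilding after a small
# change only rebuilds the layers at and below that change.
docker build -t myapp .

# Pushing again after an update transfers only the new layers,
# much like git transfers only new objects.
docker push myapp

# The stack of layers is visible per image.
docker history myapp
```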

------
Swannie
It's taken me too long to realise that this really is just LXC or similar
(OpenVZ), plus some nice directory service akin to JNDI.

Analogy wise, a Linux version of a Java Application Server cluster? EAR or WAR
files are not meant to contain their config, but look it up via JNDI lookups.
They can be stopped/started/updated in place. They share common libs, and
bring their own along with them. There are standard and vendor specific tools
to monitor performance, provide messaging, manage resources, etc. etc.

Edit: If that's what it is, I say HURRAH!!! because I've long felt that
virtual machines were a failure in our ability to get the OS level design for
dependency/config management "right".

------
binarycrusader
The article seems a bit hyperbolic with captions like this one:

    ...are building a new kind of computer operating system
That's more than exaggerating slightly. If they were talking about a
completely new OS architecture, I might accept that statement.

So much fluff here and so little technical detail.

------
bifrost
I'd say the article is poorly titled, but what Polvi and crew are doing is
pretty important.

Linux has always had a big problem with bloat in the OS, when in reality the
OS should be stripped down to a base level (the Core, as it were) and then
everything else should be managed as an addition. This is basically what most
of the BSDs do, and PC-BSD has gone even further with it and basically
"containerized" OS additions.

Linux needs this, I'm really happy to see this being done.

~~~
ciupicri
What bloat? No one is forcing you to install anything, so your OS can be
stripped down to a base level. From there on you can compile and install the
application using whatever dependencies you like. The problem is that no one
wants to do this step manually, so we're back to the issue of packaging which
CoreOS doesn't seem to solve.
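On Debian-family systems, for example, a genuinely minimal base can be had with debootstrap (a sketch; it requires root, and the release, mirror, and package are just examples):

```shell
# Install only the bare base system into a directory; no extras.
debootstrap --variant=minbase wheezy /srv/base http://ftp.debian.org/debian

# Enter it and add exactly what you want, nothing more.
chroot /srv/base /bin/bash
apt-get install nginx
```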

~~~
bifrost
> What bloat? No one is forcing you to install anything

The last time I did a "base" install of CentOS and Ubuntu, it installed a TON
of garbage, almost none of it relating to the actual operation of the machine.
The "base" install also missed a variety of tools that I'd consider essential
to a "base system". Granted I've only been a unix/unix-like system user for
around 20 years, but it seems like things are bloated when they get to that
point.

Before people go off fudding/etc, I use a variety of operating systems daily,
and I've written my own installer for 2 of them so they'd be better off
FOADing....

~~~
drill_sarge
When I do a "base" install I use Debian netinstall .iso and there's zero
"garbage".

~~~
bifrost
There's plenty of garbage, but perhaps less so than in stock Ubuntu or Fedora.

It's also Debian, so you're unfortunately likely to be fairly behind the
current patch level that's required for a maintained/modern OS. Not that
Debian is bad, but it's often woefully behind. Not that long ago I discovered
how hard it was to get CentOS up to spec on "the new stuff"; it was rather
frustrating as well.

------
scarmig
Is this just a re-implementation of SmartOS using the Linux kernel?

~~~
stass
Yes, essentially. An attempt at one, anyway.

------
throwaway1979
The article mentions someone giving a tutorial on Linux drivers and someone
building a USB thermometer (quite tangential to the story). I've been
interested in this topic for quite a while but have found it a bit
inaccessible. Can anyone point me at a good book or a simple project I can
work through to develop my chops? I'm pretty handy with Arduino and Raspberry
Pi, but I've never written any code in kernel space and want to satiate my
curiosity.

Thanks in advance if anyone replies.

~~~
philips
LDD3 is good but slightly out of date.
[http://lwn.net/Kernel/LDD3/](http://lwn.net/Kernel/LDD3/)

You can also find lots of skeleton code in the kernel that can help you learn:

Filesystem, ramfs:
[https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/fs/ramfs/inode.c)

USB device driver, skeleton:
[https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/usb/usb-skeleton.c)

PCI, probably UIO:
[https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/uio)

Video, skeleton FB:
[https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/video/skeletonfb.c)

------
read
If you wanted to rebuild the Internet, which runs largely on Linux, you may
want to rebuild Linux first. Or throw it out and start with something else.

    Why can't the Internet be just *one* machine?
------
oscargrouch
I'm also forking Chrome to create a new sort of browser (don't know if we can
call it that), not from my garage, but from a room turned into an office :)

I would definitely like to get in touch with these guys if possible, maybe to
exchange some experience, since we share the same codebase, even if it's for
different purposes.

