
CoreOS and the the Future of the Internet - stresler
https://www.packet.net/blog/coreos-and-the-the-future-of-the-internet/
======
notacoward
"Consensus is a problem that has plagued computer science phd’s for literally
decades"

No, not really. The Paxos paper was in 1989. One might reasonably argue that
Raft is an improvement, but if so it's only an incremental one - not a cure
for a "plague" as the author suggests.

The same obliviousness to actual history and reality plagues (heh) the rest of
the article as well. Saying that package managers were invented because
"compiling code from source became cumbersome" is way off base. Tell it to
Gentoo, which has long had a package manager to manage dependencies but still
compiles locally. Also, the "ubiquitous overlay network that comes attached to
all virtualized platforms" seems conspicuously non-ubiquitous except in the
VMware universe.

I don't want to seem gratuitously negative, but the author's argument is
incoherent. The conclusion that we need something like CoreOS might be
correct, but counterfactual axioms do not support that (or any) conclusion.
Containerization is not the solution to configuration-management problems. In
fact, technologies such as Docker rely heavily on tools and lessons from the
configuration-management space. Where they have abandoned that heritage, they
have often left huge gaps (e.g. security). The evolution we're seeing has less
to do with previous approaches not working than with combining and applying
those approaches in new ways precisely _because_ they have become
robust/mature enough to do so.

~~~
jfoutz
Part Time Parliament was written in 1989, but not published until 1998. It's
probably not fair to claim too much recognition of its value until the mid-
to-late '90s.

[http://research.microsoft.com/en-us/um/people/lamport/pubs/p...](http://research.microsoft.com/en-us/um/people/lamport/pubs/pubs.html#lamport-paxos)

~~~
notacoward
It was published well before 1998, as SRC research report #49.

[http://apotheca.hpl.hp.com/ftp/gatekeeper/pub/dec/SRC/resear...](http://apotheca.hpl.hp.com/ftp/gatekeeper/pub/dec/SRC/research-reports/abstracts/src-rr-049.html)

There were also other papers based on that before 1998, including the
Petal/Frangipani stuff. That something already so well known wasn't "properly"
published for nearly a decade was a bit strange, as Lamport himself explains
in the page you cited.

Even if that weren't the case, something that was published in 1998 still
refutes the "plagued for decades" claim.

~~~
jfoutz
Interesting. I'd thought that "The Paxos algorithm for consensus in a message-
passing system was first described by Lamport in 1990 in a tech report that
was widely considered to be a joke [1]" was pretty much the standard
interpretation.

Perhaps I'd read the Turing Award writeup too literally: "Unfortunately the
setting as a Greek parable made the paper difficult for most readers to
comprehend, and it took nine years from submission to publication in 1998. But
the 1989 DEC technical report did get noticed. Lamport's colleague Butler
Lampson evangelized the idea to the distributed computing community [19].
Shortly after the publication of Paxos, Google's Chubby system and Apache's
open-source ZooKeeper offered State Machine Replication as an external,
widely-deployed service."

It's a little odd: nobody seems to have actually used it before, oh, say, 2000
or so, but maybe it was in secret proprietary kinds of stuff.

[1]
[http://www.cs.yale.edu/homes/aspnes/pinewiki/Paxos.html](http://www.cs.yale.edu/homes/aspnes/pinewiki/Paxos.html)

[2]
[http://amturing.acm.org/award_winners/lamport_1205376.cfm](http://amturing.acm.org/award_winners/lamport_1205376.cfm)

------
vezzy-fnord
_We don’t need more options. We need fewer._

 _We’re moving past the days where we take an operating system off the shelf
and plug things into it until it can serve our application and into the days
of purpose built OSes that push software complexity into the realm of software
engineers._

These two statements contradict each other.

On top of that, it isn't even anything exciting, per se. Special-purpose and
tailored OSes have been the norm in plenty of fields, though I suppose not in
web application deployment.

And of course, the obligatory false dichotomy about init systems:

 _Just because something like systemV has worked very well for 32 years
doesn’t mean it should not be torn apart and re-invented._

"Full reliance on the init suite" should rise some alarms about fault
tolerance, honestly. But I don't know.

~~~
forgottenpass
_These two statements contradict each other._

Well, if you read carefully the "we" is operations, and they're just jiggering
things around enough to throw the complexities over the wall to development.

I'm with you that this (at least for the Core OS case) isn't terribly exciting
conceptually. The exciting part is someone is doing the heavy lifting to
introduce this level of flexibility into Linux. It brings the highly tailored
OS approach to users that haven't had a business case to do all that work in
house, and builds the ecosystem around the idea in a way that doesn't happen
with one-off implementations.

~~~
Terretta
Don't have to read too carefully. It's stated explicitly in his closing
remark.

------
felixgallo
"Just because something like systemV has worked very well for 32 years doesn’t
mean it should not be torn apart and re-invented."

Actually, that's exactly what it suggests, unless there's a strong reason to
do otherwise. Note that terribly written desktop linux wifi init scripts do
not qualify for the latter.

------
gtaylor
We're still a ways away from having to make our choice of container
orchestration framework(s), but it seems like it's getting to be an
increasingly difficult decision to make. There are a lot of great pieces that
can be used independently, alongside one another, or on top of one another.

For example, Kubernetes + Mesos, Kubernetes + CoreOS, Mesosphere + CoreOS,
Swarm + whatever else, etc.

I guess it's a good problem to have, but I'm looking at this as a question of
what the jigsaw puzzle is going to look like for us in six months to a year.
I'm not sure I buy the notion that all of this reduces complexity; it just
shuffles it around.

~~~
sanderjd
Yep, I'm interested in this model, but finding it very hard to effectively
evaluate all the permutations. This is hugely exacerbated by the fact that
nothing really seems complete, so pros and cons could either be due to
different design decisions, different priorities, or different levels of
maturity, and it's very difficult to tell which.

~~~
sfilipov
You don't need to evaluate all the permutations, at least not in detail, as long as
you know the main selling point of each. Go for the setup that has the
smallest number of components yet is complete enough for your specific case.
For me - it's a single CoreOS VM, no etcd or fleet, no higher level
abstractions like Kubernetes or Mesosphere. Most people who need to get a
bigger system running will need to take advantage of etcd and fleet + multiple
VMs but can still decide that Kubernetes/Mesosphere is too much. And then
there are the people who need the full package.

~~~
tracker1
For me, it was three instances with dokku-alt, deploying exactly the same
thing to all three... fronted by two (load-balanced) caching/SSL-terminating
nginx instances. Configuration comes from my cloud provider's table storage
(cached in an app module that clears its cached results every 5 minutes)...
realistically, my configuration doesn't change _that_ much.
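The caching pattern described above (an in-process cache that throws away its results every 5 minutes) can be sketched in a few lines. This is a hypothetical illustration, not the commenter's actual module, and `TTLCache` and its loader are invented names:

```python
import time

class TTLCache:
    """Tiny time-based cache: entries expire after `ttl` seconds."""

    def __init__(self, ttl=300):  # 300 s = 5 minutes, as in the comment above
        self.ttl = ttl
        self._store = {}  # key -> (value, stored_at)

    def get(self, key, loader):
        """Return the cached value, or call loader() and cache the result."""
        now = time.time()
        entry = self._store.get(key)
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]  # still fresh, skip the round trip
        value = loader()  # e.g. a fetch from the provider's table storage
        self._store[key] = (value, now)
        return value
```

Config that changes rarely, as the commenter notes, makes a crude cache like this good enough: at most one table-storage round trip per key every 5 minutes.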

------
Animats
To support containers, is Linux necessary? Would it be sufficient to run Xen
under a container loading system? That would simplify things enormously.
Server farms need very few drivers. They don't need a command line
environment, daemons, development tools, etc. Those are all elsewhere.

Administration tools don't run on the target platform any more. They're either
devoted to creating containers or managing container instances on the farm of
target machines. The server side could get much simpler.

~~~
pjmlp
It is the revenge of the mainframes and micro-kernels.

Also a return back to the days where the language runtimes were the OS.

~~~
agumonkey
I love how the layers shape-shift up and down like this.

Maybe I'm off, but it reminds me of 'The Unix Programming Environment': it was
a programming system, with binary processes as functional primitives, PATH and
the environment for late binding, and files as the medium for memory and
communication.

"The language of the system" to quote clojure author.

~~~
pjmlp
I feel that way more in regard to the Xerox PARC systems, but I get your
point.

------
beat
The industry swings back and forth between coarse-grained and fine-grained
solutions, centralization and decentralization, development and operations,
depending on the currently available technologies.

Build a coarse-grained system, and you find excessive coupling. Build a fine-
grained system, and you find excessive moving part interactions. Which kneecap
gets the bullet?

------
jilted
Linux is increasingly becoming more 'appliance-like', and I think the steps
taken here are actually positive. Too many shops flat-out ignore the OS
component of the stack. This will at least force integrators to think about
the OS more.

For what it's worth, this is just SmartOS re-implemented in Linux (though
lacking certain features of SmartOS), and funded by Google with apparently
more flexibility on where you can run this.

I personally cannot wait for the day when I am able to treat Linux like a dumb
appliance everywhere, running atop more purpose-built systems.

------
gtirloni
_It gives you the bare minimum you need to schedule and start containerized
workloads and leaves the rest to however you want to build and deploy your
software_

Isn't this just pushing the problem somewhere else? It seems that complexity
will be "load balanced" across all the container owners, which is great for
infrastructure owners, but does it "solve" the underlying problem (as it's
stated in the article)?

~~~
abhay_agarwal
Container-friendly OSes are definitely the future. While CoreOS isn't the only
one (Atomic, Photon, Snappy), there's very little need for most OS features
that we traditionally expect in servers when lightweight containers wrap all
our applications, and cluster schedulers manage failover.

As for load-balancing the problem, there actually is added complexity in new
architectures because we add a layer of orchestration on top. However, that
layer on top is becoming more and more necessary, so it makes sense to move a
lot of functionality out of the OS layer.

If you want an example, think about how modern disk systems expect JBOD
instead of RAID. When there is innovation on top, the underlying system can
become simpler.

------
beachstartup
i've never once had a "page"-type problem (i.e., getting woken up or
interrupted) that could have been prevented or solved by containers. usually
it's the result of poorly written/tested code, malicious traffic, or a
personal fuckup, like someone turning off the wrong thing by accident.

i really do wonder what this guy is talking about. containers have a set of
use cases but eliminating complexity or downtime sure as _hell_ aren't among
them. i don't even know how you can say that with a straight face.

------
digi_owl
I really wish this didn't give the Linux community a bad case of "i have a
shiny new hammer, so all my problems are ipso facto nails!".

------
cpeterso
What is CoreOS' relationship with Google? Wikipedia says CoreOS is a fork of
Google's Chrome OS, which is itself a fork of Gentoo. Chrome OS seems like an
odd choice for a server product. Why not use something simpler?

CoreOS received funding from Google Ventures and uses Google's Omaha updates
distribution system and Kubernetes cluster management utility.

~~~
philips
co-founder of CoreOS here.

We chose to fork from Google's ChromeOS early on because they had a nice SDK
setup for taking a set of Gentoo ebuilds through a pipeline of cross-build,
image-build, and then OEM-build steps. Because CoreOS is more like an embedded
Linux than a normal general-purpose one, I evaluated different options like
Tizen, OpenEmbedded, etc., but I arrived at forking the ChromeOS SDK because
ebuilds were familiar and the build pipeline was close enough to what we
needed.

Since forking we have essentially diverged in every possible way a Linux can
diverge: different kernel, different init system, different bootloader, etc.
So even though it is technically a fork, and arguably the Wikipedia article
should be updated, it was a hard fork just to get a set of build tools.

If you are interested you can see our SDK guide here:
[https://coreos.com/docs/sdk-distributors/sdk/modifying-coreo...](https://coreos.com/docs/sdk-distributors/sdk/modifying-coreos/)

~~~
cpeterso
Thanks. I didn't realize it was a hard fork. I guess you've got to start
somewhere. :) And as an OS company, you would want full control of your OS
distribution and not depend on upstream's decisions.

------
lsllc
What about ARM? Seems Ubuntu Core (Snappy) is way ahead of CoreOS.

------
juliangregorian
So CoreOS is cool because it's minimal? Serious question, why is CoreOS a
better container host than say, boot2docker?

~~~
robszumski
(CoreOS employee, would love to hear other opinions)

Three main things:

1\. everything that systemd provides (dependency chain, common logging, cgroup
management, socket activation, timers)

2\. designed for sane management from 1 to 1000s of machines. Easy PXE
booting, etc.

3\. self-updating (you can turn this off!). The state of the art for servers
is "never touch it again" or a multi-day fire drill to patch hundreds of
systems for the next Heartbleed.

Basically, CoreOS should remove a lot of the tedious upkeep tasks from your
ops team so they can focus on moving your infrastructure forward.
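Point (1) above is concrete on CoreOS: every workload is just a systemd unit, which gets dependency ordering, journal logging, and cgroup supervision for free. A minimal, purely illustrative unit wrapping a Docker container (the service name and image are invented for the example) might look like:

```ini
# /etc/systemd/system/web.service -- hypothetical example, not from the article
[Unit]
Description=Example containerized web app
After=docker.service
Requires=docker.service

[Service]
# "-" prefix: ignore failure if no old container exists to remove
ExecStartPre=-/usr/bin/docker rm -f web
ExecStart=/usr/bin/docker run --name web -p 8080:80 nginx
ExecStop=/usr/bin/docker stop web
Restart=always

[Install]
WantedBy=multi-user.target
```

Stdout/stderr land in the journal automatically, and `Restart=always` gives basic single-machine restart-on-failure; fleet (mentioned elsewhere in this thread) scheduled this same unit format across a cluster.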

------
spdustin
Anyone else read this title and think "that seems awfully self-important"? And
walk away from the article wondering how the future of the whole entire
INTERNET is at stake?

