
The End of the General Purpose Operating System - grey-area
http://www.morethanseven.net/2016/11/05/the-end-of-the-general-purpose-operating-system-as-it-happens/
======
erikpukinskis
Ah, the grand cycle of layering and integration. You build a machine that does
a thing, modify it to do another, and another, and another, until eventually
someone says

We could have a layer that did all of this!

The layer is introduced, applications are rewritten to target the layer, and
people slowly lose touch with what the world looks like beneath The Interface.

And some things that should not have been forgotten were lost. History became
legend. Legend became myth.

As people target The Interface, it grows into a more and more general purpose
machine. Layers of indirection build up. Once simple tasks must propagate up
and down a big stack.

Until one day, someone stumbles across the layer beneath. Gosh, the underlying
system does 99% of what we need, out of the box. Why do we need this layer at
all? We can do most of what we need with a couple single purpose tools. And
the rest of the complexity can be taken up by the application layer. I don't
mind doing a little more configuration there if I can get a huge performance
and complexity win.

And the developers, frustrated with how big and bloated their layers have been
feeling, flock to this new simple tool, and they port their applications, with
a little extra boilerplate and big complexity wins. And then they port
another, and another. Until someone realizes

We could have a layer that did all of this!

And it might seem pointless, when I write it up in this snarky way, but it
absolutely is not. This is the process by which we discover the fundamental
building blocks of software. Each time we add a layer, and each time we take
one away, we learn something new about what information is. I love it.

~~~
urza
Could you please give some examples of "removing layers"? I have a feeling
that we only ever add new layers of abstraction on top, but when do we remove
them?

~~~
bartwe
The Vulkan api comes to mind

------
Animats
This makes sense. The server OS underneath ought to be a lot smaller than
Linux or Windows. No interactive logon support, no display support, no printer
support, no hot plugging support, few drivers, and no battery management. And,
most importantly, few changes. The OS underneath ought to be simple enough to
be installed for the life of the hardware.

The real "operating system" is the container orchestration system.

~~~
Hello71
> No interactive logon support

technically doesn't come by default with Linux (the kernel) anyways

> no display support

can be configured out. I'm pretty sure you can even configure out the whole
TTY subsystem if you want, so you can't even use a serial port to debug your
madness.
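As a sketch of how far you can strip things down in your .config (these are
real Kconfig symbols, but the exact dependencies vary by kernel version, and
you need CONFIG_EXPERT before CONFIG_TTY can even be disabled):

```
# Drop the whole TTY subsystem, serial console included:
# CONFIG_TTY is not set
# No virtual terminals, display drivers, or framebuffer:
# CONFIG_VT is not set
# CONFIG_DRM is not set
# CONFIG_FB is not set
```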

> no printer support

who still uses parallel ports? for anything else (except usblp which basically
never works) you need CUPS anyways

> no hot plugging support

last I checked, hotplug is mandatory on x86 (but only for certain components,
so you could configure out say USB support if you wanted)

> few drivers

sure, you can compile your own kernel if you want

> no battery management

I don't really know what "battery management" means.

> The OS underneath ought to be simple enough to be installed for the life of
> the hardware.

you can _already_ try this with Linux. then you run into problems like "what
happens when there's a vulnerability in Xen", because
[https://marc.info/?l=openbsd-misc&m=119318909016582](https://marc.info/?l=openbsd-misc&m=119318909016582).

~~~
johncolanduoni
> I don't really know what "battery management" means

I assume he means "power management" (i.e. ACPI on x86). You can run the
system without taking control of the ACPI hardware, but this is a _terrible_
idea. It'll affect your ability to run the CPU properly, even assuming you
don't care about power usage (and large server deployments sure do!). Not to
mention all the nasty bugs and problems you'll run into because the hardware
wasn't designed to run for long periods of time without ACPI active.

~~~
asadjb
But would that matter, given more and more of our machines are running on a
hypervisor of some sort?

I know it's almost mandatory for the base machine running on _actual_
hardware, but given that most of the servers we use now are just virtualized,
how bad would removing ACPI support be? Also, would it be useful to remove
that?

~~~
johncolanduoni
Not really. You'd have to come up with your own standard for conveying
configuration information and shutdown/restart at least (for this thin guest
I'm guessing you wouldn't mind losing suspend). I believe you'd also lose
memory hotplug (although I don't think you'd lose memory ballooning) unless
you implemented that too. That's a new standard, and new code for your guest
_and hypervisor_.

Also I remember reading something about some hypervisors using the guest's
decisions with regards to CPU power states to inform the hypervisor's setting
of the same on the real hardware. I'm not sure if this is implemented in
mainstream hypervisors, or if it's really useful, but that's another thing
you'd lose.

------
pjc50
This seems like another point on the divergence from the traditional security
model. In the 70s, the software on a computer was entirely controlled by the
system administrator; the software was presumed secure and the threat was from
the users. Users needed to be partitioned. In the present day, there's only
one user who is also the system administrator, but the _software_ is the
threat to itself and others.

True for both server-containers and mobile OSs.

~~~
marssaxman
Yes! This is an argument I've been making for several years now: the user-
centric security model is obsolete and unhelpful, because most computers have
either one or zero users in the traditional sense. The whole unix-style
reduced-privileges-plus-sudo approach that Windows and MacOS have copied is a
nuisance which doesn't really solve the problem; the permissions systems in
Android and iOS are a little closer to the mark. Qubes is a good step forward.
The real problem is exactly what you said: we can't fully trust software, even
software which is not explicitly malicious, because software can be exploited,
and because software authors sometimes want to be "helpful" in ways we'd
really rather they weren't.

Every piece of software should run within a sandbox, and the human user should
have complete control over which resources are or are not exposed to each
sandbox; that's the future operating system I want to see. I did some
exploration around the idea of doing this with hypervisors and unikernels
([http://www.github.com/marssaxman/fleet](http://www.github.com/marssaxman/fleet))
but it got to look too much like rewriting all the software in the world.
Containers are less elegant, but seem to be a more practical way of moving in
the right direction.

~~~
yellowapple
The user-centric model is actually still relevant if you use users as the
means of software isolation. This is a major aspect of the security models of
OpenBSD and (last I checked) Android, and is generally effective (unless
you're deliberately subverting it, as is common on "rooted" Android devices).

However, said model still has connotations of a specific actual user, which is
no longer entirely accurate in such applications of that model. It'd be nice
to have a sort of "subuser" system where - within, say, the user for my own
desktop account - I could further divide the software I run into "users" for
things like Firefox or Spotify or what have you.

Basically, I'm in agreement that everything should be sandboxed. We have the
technology to do it, and in fact have had the technology to do it for decades
(maybe not quite as well as we can do _now_, but confining daemons to their
own users has been possible for a long time).

My own dream system would be one where every "package" for my operating system
is a filesystem image with `/bin`, `/lib`, `/etc`, and possibly `/var`. One of
these packages would provide the root filesystem with a microkernel and the
minimum supporting libraries and executables required to get a container-
oriented `init` equivalent running; then, `init` would spin up each service in
complete isolation by spinning up `chroot`s or something with various packages
union-mounted on top of one another. One of these services could be for a
graphical login, in which case said service would spin up another isolated
container of sorts for my login session, and inside that I could run
applications built up from the same union-mount approach with the same sort of
isolation.

Plan 9 From Bell Labs is probably the closest thing to that ideal world.
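
A rough dry-run sketch of what such an `init` might do for one service,
assuming Linux overlayfs and hypothetical package paths (it only echoes the
commands, since actually running them would need root):

```shell
#!/bin/sh
# Dry-run sketch: print the commands that would assemble a service root
# from union-mounted package images and start the service inside it.
# Paths like /pkgs/base are hypothetical; "run" echoes instead of executing.
run() { echo "$@"; }

run mkdir -p /run/svc/upper /run/svc/work /run/svc/root
run mount -t overlay overlay \
    -o lowerdir=/pkgs/base:/pkgs/app,upperdir=/run/svc/upper,workdir=/run/svc/work \
    /run/svc/root
run chroot /run/svc/root /bin/app
```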

~~~
fao_

    However, said model still has connotations of a specific actual user, which is
    no longer entirely accurate in such applications of that model. It'd be nice to
    have a sort of "subuser" system where - within, say, the user for my own desktop
    account - I could further divide the software I run into "users" for things like
    Firefox or Spotify or what have you.

I think you might be able to use groups for this.

~~~
falcolas
In a fashion, you can also do this with sudo -u:

    sudo -u ff_user firefox

This way Firefox is run as a separate user within the current user's session,
with all that entails. Of course, that would be a serious PITA for your
average user to set up.

~~~
fao_
I guess you could script a lot of it to happen automagically for a subset of
applications.
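
For instance, a small hypothetical helper could map each application to its
own account (the firefox_user-style accounts would have to exist already, and
sudoers would need to allow the switch):

```shell
#!/bin/sh
# Hypothetical helper: build the sudo invocation that launches an
# application as its own dedicated user, e.g. firefox as firefox_user.
sandbox_cmd() {
    app="$1"
    printf 'sudo -u %s_user -- %s\n' "$app" "$app"
}

sandbox_cmd firefox   # prints: sudo -u firefox_user -- firefox
```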

------
tony-allan
There is ample evidence that we cannot write software that can be installed,
run and updated in a simple and secure manner. Software is too complex, with
too many dependencies. Hackers are better than us almost every time -- and
once in, they can bounce around at will.

I just want to run my own containers or those written by others on commodity
services from Amazon, Google or Microsoft. Or even spin up my own environment
if it suits me.

I want to be able to install and remove software quickly, completely and
whenever I want. I especially want to be able to install software I don't
fully trust with no fear of consequences. I want to be able to install
multiple versions and even multiple instances.

I want my PC (Mac/Linux/Windows) to be a thin hypervisor with everything in a
container, including IO and UI.

I still want open source projects on GitHub that I can fork and contribute
changes to at will.

I want containers to be small and ultra simple.

All of this means that my "OS" only needs to manage containers and resources.
All traditional elements, such as logging, authentication, and window
managers, should live in their own containers.

~~~
taeric
You seem to be implying that containers are somehow immune to hackers...

Truth is, it is relatively easy to keep a machine secure. Just not if you want
it to be a convenient machine.

You think you want containers all isolated from each other. Except for the
uncountable times and ways you want to exchange data between them. This, at
the least, includes the data of who you are, as the user.

~~~
johncolanduoni
> Truth is, it is relatively easy to keep a machine secure. Just not if you
> want it to be a convenient machine.

So the people who build high assurance systems (which currently require a
_ton_ of work) are all too focused on convenience?

~~~
taeric
Getting work done is a huge convenience. :)

Specifically, remote access is a huge convenience that by itself moves the
problem out of the easy territory.

------
cyphar
I don't really agree (I'm one of the developers of ocid / cri-o). Even with a
container-manager-as-init you still need to administer your control plane
(updates, configuration, so on). Now, you could go the CoreOS way of creating
an "administration 'container'" which bindmounts / to somewhere inside the
container -- that _works_ but I would still consider it to not be any
different from ssh-ing into the machine as root (it removes the need for sshd
and means that you can manage the server through the same tooling you manage
containers with, but the purpose is the same -- give you a general purpose OS
environment you can use to administer things).

Not to mention that your actual containers will still contain general purpose
OS images.

Also, as an aside, cri-o would be a very bad choice for PID 1. It's
specifically designed so that you don't need it to be a long-running daemon
(which is something that Docker cannot do and is actually a very useful
feature that we hope to keep). And you can't have your PID 1 just exit
whenever it wants to.

~~~
justincormack
Even rancher no longer uses container manager as pid 1; these systems being
discussed are minimal not single binary. (Only systemd is putting some
"container management" features into pid 1).

~~~
icebraining
Which container management features is systemd putting into pid 1?

~~~
cyphar
machinectl and systemd-nspawn. Namely, systemd now manages container processes
if you ask it to. My experience with the systemd cgroup handling doesn't fill
me with much faith on this topic.

~~~
icebraining
systemd does manage containers, but from what I can tell, that functionality
is not in PID 1, is it?

------
hoodoof
Am I the only person who thinks that Docker adds unneeded complexity?

~~~
gerbilly
No, you are not the only one :-)

------
theamk
I feel that the author is very enthusiastic, but there are some pretty serious
errors:

"... fight between Docker and systemd is inevitable" -- this fight has been
going on for a while (at least a year):
[https://lwn.net/Articles/676831/](https://lwn.net/Articles/676831/)

"... The reason why you'll do this, rather than compose everything yourself,
is compatibility. Whether it's kernel versions, file system drivers, operating
system variants or a hundred variations that make your OS build different from
mine. Building and testing software that runs everywhere is a Sisyphean task."
-- kernel versions, file system drivers, and operating system variants (in the
form of the Docker daemon version) are still going to be around and will still
affect containers. So you've traded "test on Fedora, RHEL, Ubuntu" for "test
on AWS Docker, Tectonic, OpenShift". Yes, you no longer have to worry about
.so versions, but there are still plenty of reasons to test in multiple
environments.

"... the operating system is an implementation detail of the higher level
software. It's not intended to be directly managed ..." -- yeah, so I have
this proprietary SAN... or dual 10G cards which need to be tuned and bonded...
or a mix of fast and slow disks... or even any non-trivial RAID setup... Those
things are the most annoying parts of managing your own servers, and
unfortunately they are not going anywhere.

------
walterbell
There is a proliferation of OS-native hypervisors (Windows 10, macOS, FreeBSD,
OpenBSD, Linux) which use hardware virtualization to isolate workloads which
may be VMs, containers, unikernels, apps or even processes.

------
pslam
There is a monumental amount of "faster horse" in modern OS deployment.
Hypervisors, and their ugly cousin, secure monitors, are an example of
Conway's Law. They exist because there are different groups involved in
writing the kernel, the userland, and the server deployment.

The author seems to feel that everything will concentrate at the hypervisor
level - the kernel and userland are just a "detail", since they are
single-purpose. However, you should spot that the kernel and userland can
really be compressed into one layer. So why are there 3 layers at all? Then we
have kernel and userland again, and hey, we just came full circle.

Again, it's just a reflection of how the organizations are arranged. Given a
single organization with authority over the entire stack, it would be a
terrible waste to have that 3 layer stack.

------
winter_blue
Library operating systems like _IncludeOS_ are a further extension of this
phenomenon: [http://www.includeos.org](http://www.includeos.org)

With IncludeOS, you don't even have a full kernel. You just have the bare
necessities, and your code runs in ring 0.

~~~
cheiVia0
So your web server code has direct write access to the hard drive firmware?

~~~
yellowapple
If you run it on bare metal.

I reckon the point of IncludeOS is to run an application directly on a
hypervisor like Xen (instead of needing a separate guest OS).

------
dkarapetyan
A wise man once said the only thing new is the history you don't already know.
The process is a perfectly good unit of distribution: it does not require
orchestration, integrates nicely with PID 1, leverages OS services for getting
work done, can talk over the network, start other processes, etc. The reason
containers have gained a foothold is not that the process is a poor unit of
abstraction and distribution. The reason is that dependencies have gotten out
of hand.

Ask anyone who deploys fat jars to run on the JVM. They've been leveraging
containers for ages now. Ask the golang folks how they like deploying single
binaries. The OS is not going away. Better process isolation is the future,
so I'm gonna say the author has a slightly too futuristic view of things. The
current container ecosystem, and the churn around it, is a half-measure on the
way to better process management.

~~~
pjmlp
The day I moved from C++ back to Java and other languages with richer runtimes
at work was when I stopped caring about the underlying OS.

A JEE container is already a full OS, adding the remaining services that a
plain JSE installation might still lack; it doesn't need yet another layer to
waste resources and add more administration work in maintenance and security.

~~~
sqeaky
I write in C++ and I do not care about the underlying OS 98% of the time
anyway. The other 2% of the time I might write something that needs a kernel
call, but I will make a class or some other abstraction that represents it and
put all my different implementations in there.

For code that cares about word size or something similarly pervasive but
platform dependent, there are templates and constexpr to have the compiler
evaluate things. _I_ won't be putting constants like 32 or 64 in my code; I
will do things like "align_to(system_details::cache_line_size)" and let my
compiler handle the details.

~~~
pjmlp
Which is kind of true if you can stick with the standard library and control
which compiler gets used.

Back in those days, we were deploying code across heterogeneous OSes (not all
POSIX), using the OS vendors' compilers, which were still catching up with
C++98, let alone C++03.

So this really restricts how much you can make use of the standard library and
which third-party libraries you can use in a portable way.

Then it's time for the #ifdef party.

~~~
sqeaky
I covered exactly this. I write code for Windows/Linux/Mac OS X. Write a class
to contain those ifdefs. I did this before C++11 as well.

This isn't a new technique, but for some reason people adopting C++11 and
C++14 seem more willing to do it than people who wish C++ was really just C
with classes.

Also, write unit tests for the class. Run the tests in CI on every platform
and a variety of configs. It is not hard, and there are great free or cheap
tools like Jenkins, TravisCI and Appveyor.

~~~
pjmlp
> This isn't a new technique, but for some reason people adopting C++11 and
> C++14 seem more willing to do it than people who wish C++ was really just C
> with classes.

This was exactly part of the problem.

Most of the enterprise code I used to deal with was not even that; rather, it
was C compiled with a C++ compiler.

However, my point with rich runtimes was that you shouldn't bother to even do
that; the runtime is the OS, kind of.

------
digi_owl
The title and the content are wildly divergent...

------
hannesm
Processes? Seriously, if there's only a single one, why would you waste
resources on process management and process information? Just get over it ;)
[http://unikernel.org](http://unikernel.org)
[https://mirage.io](https://mirage.io) :D

~~~
icebraining
There isn't a single one, there's at least one per container.

------
frik
The end of an era, yes. But not what you think. Linux, *BSD and Android
(Linux) are in so many devices. And new general purpose operating systems like
Google Fuchsia are in the pipeline. It's the end of the old closed OSes; their
time is over.

~~~
pjmlp
> It's the end of old closed OS, their time is over

So where can I get PS4 BSD or those Google layers not available in the AOSP
repository?

------
noescape
It's happening for the operating system. Will it happen for the programming
language?

