
We Need More Operating Systems - Alfred2
http://blog.acthompson.net/2013/01/we-need-more-operating-systems.html
======
ChuckMcM
It is interesting how folks who came from a commercial exposure to OSes hated
UNIX. I admit I was one of them: I was a big TOPS-20 fan and thought RSX-11M
had a lot going for it, Unix not so much. Then I went to work for Sun Micro
and later joined the kernel group there.

Operating systems are 'easy' in the sense that a small group can create them,
and pretty much do all the legwork needed to get them into a product. Building
hardware is more complex, it involves coordinating manufacturers, part
suppliers, and the software to run it.

What happens then is if you're a hardware manufacturer _and_ an operating
system supplier, then you can tell your designers to make the hardware like
you want, you can write the libraries you need in the OS, and everyone
leverages the work of everyone else. In the Windows world the 'duopoly of
Microsoft and Intel' led to an artificial relationship between hardware
manufacturer and OS vendor but it was the same.

But if you're _not_ a hardware supplier, you spend an inordinate amount of
your operating system resource trying to deal with random bits of hardware you
might want to use. Then your OS makes as few assumptions about the hardware as
possible and it becomes as simple as it can in order to avoid incurring a huge
overhead in getting new hardware into the system. It was this latter feature
of UNIX which I really didn't appreciate until I went to work at Sun.

I agree with Thompson that it would be nice to have more general-purpose
operating systems. They are out there: Spring, Chorus, eCos, VMX, QNX, etc.,
but generally they target a more specialized niche, whether it's embedded or
research or some level of security. I'd love to see more accessible OSes as
well.

One of the projects I started a while back is an OS for ARM systems that is
more like DOS or CP/M or AmigaOS than UNIX or Windows. The thought was
something simple that was straightforward to program: not your next cell
phone OS or something you'll need a couple of gigabytes of memory to run,
just something you could write and deploy code on for fun or educational
purposes. It has been fun to look at "apps" like this. Imagine something that
had a shell that was sort of like the Arduino/Processing UI except self-
contained. But again, it's not general purpose. It's just a tool.

I think the next 'real' OS wave is going to be something very different from
what we think of today: something that assumes there is a network behind it
and that the rest of itself is out there somewhere as APIs.
Of course civilization may collapse first :-)

~~~
javert
> I think the next 'real' OS wave is going to be something very different
> from what we think of today: something that assumes there is a network
> behind it and that the rest of itself is out there somewhere as APIs.

I'd be quite interested to hear more about this, if you have more ideas and
care to elaborate.

~~~
ChuckMcM
Understand that I'm a networking guy from way back so I tend to see in that
light :-)

So there are a number of forces at work in the market, one is the decreasing
cost of compute and storage, another the increasing availability of wide
bandwidth networks. A seminal change in systems architecture occurred when PCI
express, a serial interconnect, became the dominant way to connect "computers"
to their peripherals. Now sprinkle in the 'advantages' of keeping things
proprietary, with the 'pressure' of making them widely used, and those bread
crumbs lead to a future where systems look more like a collection of
proprietary nodes attached to each other by high speed networking than the
systems we use today.

From that speculation emerges the question, "What does an OS look like in that
world?" and so far the only thing I can think of that seems to work is a bunch
of 'network APIs.' Let's say you build your "computer" with pieces that are
connected by 10Gbit serial pipes and a full bandwidth cross bar switch. Your
'motherboard' is the switch, the pieces are each completely 'proprietary' to
the manufacturer that built them but the API is standard. This is not a big
change in storage (SAS, FC, and SCSI before it have always been something of a
network architecture), it's becoming less uncommon for video, so why not
compute and memory? Memory management then becomes a negotiation between the
compute unit and the memory unit over access rights to a chunk of memory.

The Infiniband folks touched on a lot of these things 10 years ago but they
were a bit early I think and they aimed pretty high. But they have some really
cool basic technology and show how such a system could be built. Trivially so
if there were a 'free API' movement akin to the FOSS movement.

This removes the constraint of size as well. And if you're cognizant of the
network API issues amongst your various pieces you get to the point where your
'OS' is effectively an emergent property of a bunch of nodes of various levels
of specialization co-operating to complete tasks. Computers that can be
desktop, or warehouse sized, and run the same sorts of 'programs'.

~~~
icebraining
Have you seen Plan9?

 _A Plan 9 system comprises file servers, CPU servers and terminals. (...)
Since CPU servers and terminals use the same kernel, users may choose to run
programs locally on their terminals or remotely on CPU servers. The
organization of Plan 9 hides the details of system connectivity allowing both
users and administrators to configure their environment to be as distributed
or centralized as they wish. Simple commands support the construction of a
locally represented name space spanning many machines and networks._

<http://plan9.bell-labs.com/sys/doc/net/net.html>

~~~
ChuckMcM
Yes, I am aware. I was at a Usenix conference where Rob Pike presented a paper
on it, back when it was a bright idea out of Bell Labs. It is the curse of
brilliant people that they see too far into the future, get treated as crazy
when they are most lucid and get respect when they are most bitter [1]. I was
working for Sun Microsystems at the time, and Sun was pursuing a strategy
known as "Distributed Objects Everywhere" (DOE), which insiders derisively
called "Distributed Objects Practically Everywhere" (DOPE); it was thinking
about networks of 100 megabits with hundreds of machines on them. Another
acquaintance of mine has a PDP-8/S, a serial implementation of the PDP-8
architecture that Gordon Bell did well before serial interconnects made
sense. It was a total failure; the rest of the world had yet to catch up.
Both Microsoft and Google have invested in this space. Neither has published
a whole lot, but every now and then you see something that lets you know that
somebody is thinking along the same lines, trying to get to an answer. I
suspect Jeff Bezos thinks similarly, if his insistence on making everything
an API inside Amazon was portrayed accurately.

The place where the world is catching up is that we have very fast networks
and very dense compute. In the case of a cell phone you see a compute node which
is a node in a web of nodes which are conspiring to provide a user experience.
At some point that box under the table might have X units of compute, Y units
of IO, and Z units of storage. It might be a spine which you can load up with
different colored blocks to get the combination of points needed to activate a
capability at an acceptable latency. If you can imagine a role playing game
where your 'computer' can do certain things based on where you invested its
'skill points' that is a flavor of what I think will happen. The computers
that do shipping, or store sales will have skill points in transactions, the
computers that simulate explosions will have skill points in FLOPS. People
will argue whether the brick from Intel or the brick from AMD/ARM really
deserves a rating of 8 skill points in CRYPTO or not.

[1] I didn't get to work with Rob when I was at Google although I did hear him
speak once and he didn't seem particularly bitter, so I don't consider him a
good exemplar of the problem. Many brilliant people I've met over the years,
however, have been lost to productive work because their bitterness at not
being accepted early on has clouded their ability to enjoy the success their
vision has seen since they espoused it.

------
npsimons
(upfront confession: I love Linux)

Ah yes, the old "why did UNIX win? OtherOS was so much better!" chestnut.
Reading the article, yes, the usual suspects come up: VMS, TOPS-20, etc. What
I'd like to know is _why_ these OSes were supposedly better, or at least see
a list of some of their advantages (e.g., versioning filesystems[1]). I'm not
questioning the pedigree of the author (someone who actually worked with
almost a dozen really different OSes), but the article is almost vacuous, and
entirely predictable.

I don't doubt that maybe back then UNIX was a dog (just look up the UNIX
haters handbook sometime), and it would be really neat to see some
revolutionary research in OSes gain traction (my interests in this area
include too many to list, but Plan9, Coyotos and Oberon come to mind).

But I think a few important factors have led to Linux leading the pack:
first, it's open source. Second, "worse" is better[2]. And perhaps most
important, Linux will (probably) be the last OS[3].

[1] -
[http://en.wikipedia.org/wiki/Versioning_file_system#Files-11...](http://en.wikipedia.org/wiki/Versioning_file_system#Files-11_.28RSX-11_and_OpenVMS.29)

[2] - <http://dreamsongs.com/RiseOfWorseIsBetter.html>

[3] -
[http://linux.slashdot.org/comments.pl?sid=101317&cid=863...](http://linux.slashdot.org/comments.pl?sid=101317&cid=8633695)

~~~
jacquesm
> Linux will (probably) be the last OS[3]

As a long time linux user I sincerely hope you're wrong about that.

There is definitely room for something more modern.

~~~
jspthrowaway2
Are you talking about a new kernel or a new userspace? It's an important
distinction.

~~~
jacquesm
I'd be very happy to see something like QNX: unix on the outside, a robust
message-passing kernel with clustering built into its heart.

QNX is _much_ more than just a neat little kernel to do embedded stuff with.

~~~
spc476
I remember working with QNX back in the mid-'90s and found it quite fun
(compared to Unix at the time). A friend of mine worked at a company that made
commercial X servers, and that company's fastest X server ran on QNX (yes,
even with the overhead of message passing, it was faster than their Unix
version on the same hardware).

~~~
jacquesm
Hummingbird by any chance?

~~~
spc476
No, it was Metro-X from MetroLink.

~~~
jacquesm
That vaguely rings a bell but I don't think I've seen it in action. Pity!

Maybe the future will bring us something like that again, I sure wouldn't
mind. The naysayers' argument that 'microkernels are slow' seems to come
mostly from those who have never actually used a microkernel-based OS for
anything. What I remember was, especially for the time, nothing short of
astounding performance: 30K slices/second on a lousy 486/33 was not
exceptional at all. At the time most other OSes were doing 20 or so...

------
pnathan
I'd like to twist the title to be, "We need to solve more hard problems".

A considerable amount of what we do in computing is either (1) re-solving old
problems with a new shell/language/os/widget library or (2) plumbing between
said solutions for communication. This is a fairly ridiculous state of
affairs. At the risk of being a snob & starting a flamewar, I'd like to point
out that node.js doesn't need to exist: it reimplements systems that already
existed, and forces a contorted programming style. Why isn't that energy
poured into Erlang or better libraries in C++ for event handling? The same
argument can be made for a number of different technologies (why do we have
F# divergent from OCaml? why do we have D instead of a better C++? why does
MS have to keep spinning its APIs and forcing rework?).

My point is, when something is solved, it's good to move on until a compelling
reason is found to revisit it. Operating systems are largely solved, but most
of the research stems from the '60s-'80s, AFAICT. A number of concepts brought
up back then got dropped in the rush to PCs and DOS. One that I particularly
remember is the idea of multilevel security partitions. OK, maybe that's a bad
idea. But it'd be good to look into it again. What about operating systems
written in dynamic languages? What would that look like in a
multicpu/multithreaded/multiuser environment? It'd likely be absurdly slow,
but maybe it doesn't have to be. There's got to be some great work that was
done in Genera that could be carried into the modern world.

Why, for instance, is the most common text entry solution a QWERTY keyboard?
Why even a keyboard? Isn't there a better solution (voice has worked really
badly for me, sadly). Isn't there a better way than what amounts to a revamped
typewriter? What about the mouse? My mouse looks suspiciously like Englebart's
mouse from '69 or so. Tablets are a _great_ step in this regard. I don't
really _like_ them, but I have great respect for Apple, Google, and MS for
trying to move beyond the WIMP paradigm. Even if in a decade we still use it,
at least we know it's a local maximum. :-)

------
RyanZAG
An OS just needs to get out of the way and let applications work so that
people can actually get things done. If the OS is succeeding in allowing the
required applications to be made, then we don't need more operating systems.
If it is failing to cater to the specific needs, then a new operating system
will be created to run those apps - see iOS, Android.

You don't build an OS and then build things for it. You build things you need
to run, and then you build an OS if you can't run them without it. The article
has this relationship backwards imho. In the same way, many people create
frameworks and then try to fit them to problems instead of creating frameworks
to solve problems.

------
tinco
I think the time that new operating systems are able to gain traction is right
on our doorstep.

The bar for making a desktop OS has been lowered: from needing to support
Microsoft Office (or implement a feature-equivalent office suite), a full-
fledged UI system, and many system libraries, to just needing WebKit to be
ported to it.

When your operating system supports HTML and JavaScript, it can already run
90% of what most people do with their computers.

To add to that, hardware that is easier to boot (ARM SoCs) is becoming cheap
enough to produce that almost anyone (Raspberry Pi, OUYA) can launch a new
hardware platform.

So the distance is being closed from both sides.

I personally think the next step to making operating system development a
popular field again is a new programming language. Yes, really: I think a new
C would be great. A C that has sane syntax and semantics, lets you structure
your code, and supports integrating higher-level constructs.

~~~
stephencanon
In what way does C fail to have sane syntax and semantics? You may not _like_
the syntax or semantics, but they are reasonably simple and clean; I would say
they are eminently sane. The portion of the C spec that defines the language
(chapter 6) is actually quite straightforward compared to most language
specifications.

The syntax is especially straightforward. I can find a few complaints about
the semantics (mostly that there would ideally be less implementation- and
undefined-behavior), but no worse than my complaints about any other language.

~~~
tikhonj
The syntax may be familiar, but I've always thought it's rather arbitrary.
There's relatively little overarching design: it's as if they just stuck
together features until they had something reasonably big. The exact selection
of syntactic forms only seems reasonable because it's been around so long
and influenced so many popular languages, not because it's particularly neat
or simple.

Some syntactic forms are rather annoying. For example, there are different
ways to write pointer declarations: int* foo vs. int *foo. (Hmm, I am no good
at escaping asterisks in HN comments.) This just causes confusion;
conceptually, the * is part of the type, but syntactically it isn't! The array
syntax is also rather arbitrary. Why can you write foo[0] or 0[foo] when every
other similar operation is an infix operator? Why do you write int foo[] when
the type of foo is really an int array (say, int[]) rather than an int? Why
have both an if-
statement and a ternary operator? (Actually, the whole statement/expression
divide is annoying especially in how it influenced a whole bunch of other
languages.) There are a ton of other little inconsistencies and annoyances
(let's not even talk about the C preprocessor). Ultimately, the main thing C
syntax has going for it is familiarity, but that seems to be more than enough.
I certainly find the syntax strictly worse than a bunch of non-C-like
languages, although it's difficult to compare them because they have vastly
different semantics.

The semantics are also a problem. As you mentioned, the overabundance of
undefined behavior is certainly not good. This certainly causes very real
difficulties for even experienced programmers. Then there are a whole bunch of
classic C errors that cause rampant security problems and hard-to-debug
behavior. Now, some of these are inevitable due to C's providing low-level
access for memory management, but others could probably be avoided with
different design decisions.

Now, C is obviously a practical and widely-used language. But, as ever, a
language's popularity is far more a social issue than a function of its
design. The main reason it seems simple and clear is that it's been around
forever, it's influenced the syntax of most popular languages, and everyone
has either learned C or a very C-like language.

~~~
mjcohenw
"Why have both an if-statement and a ternary operator?"

Because one is for control of statement execution and one is an expression
that returns a value. Two similar-in-purpose but different-in-meaning
constructs.

In my experience, if the ?-: operator is written in the same way as the if-
else (condition, true, and false parts each on their own line), nested
expressions are quite clear and understandable.

------
lazyjones
Seconded, but Linux and OS X seem to be the root causes of this
(Net-/Open-/FreeBSD people flocking to them).

What happened to all the extremely interesting efforts like EROS
(<http://www.eros-os.org/>) with its global persistence (i.e., it can
basically recover its previous state after the system is switched off),
Amoeba ([http://en.wikipedia.org/wiki/Amoeba_distributed_operating_sy...](http://en.wikipedia.org/wiki/Amoeba_distributed_operating_system);
multiple hosts seen as one) and others that actually brought extremely useful
(even from today's perspective) new paradigms to the table?

Most new OS-related efforts seem to happen around virtualization nowadays.

~~~
ginko
A truly distributed OS would be incredibly useful in times like this where
people may have a mobile phone, a tablet, a desktop workstation and maybe even
their own server.

Having all files, processes and system resources accessible from all my
devices would be great.

~~~
astrobe_
Another Thompson worked on it with others: it's the Plan 9 project, and
later Inferno. Current efforts are headed right in that direction at both the
OS and language level.

What you describe is already under way for devices of the same
brand/manufacturer (I'm afraid making it happen between vendors is going to
be a much longer story). It seems to me like a strong selling argument, and I
don't understand why some big consumer electronics vendor like Sony or LG
hasn't made it happen already. Maybe the technology isn't cost-effective yet.

I think the next major step is to make these technologies both user-friendly
and programmer-friendly in order to make them mainstream. The innovation the
author wishes for already happened 20 years ago. What really needs to happen
is for someone to take it out of the cupboard and use it.

------
rossjudson
There are two primary issues. First is the performance of a monolithic kernel
versus a message-passing microkernel. Linux has been tuned pretty extensively;
as a monolithic kernel with a lot of eyeballs trained on it, it's hard to
beat.

Second is the fact that the vast majority of OS kernel source represents deep
OS _design decisions_. There are a lot of decisions to make. If you're going
to make a lot of decisions that are the same as the Linux set, there's not
much point in writing your own code. User space has a set of expectations;
Linux conforms. Traction is difficult with non-conformance.

------
mlacitation
Rob Pike actually wrote something similar in his short, self-proclaimed
polemic, "Systems Software Research is Irrelevant":

<http://herpolhode.com/rob/utah2000.pdf>

"Only one GUI has ever been seriously tried, and its best ideas date from the
1970s. Surely there are other possibilities. (Linux’s interface isn’t even as
good as Windows!) There has been much talk about component architectures but
only one true success: Unix pipes. It should be possible to build interactive
and distributed applications from piece parts. The future is distributed
computation, but the language community has done very little to address that
possibility."

"The world has decided how it wants computers to be. The systems software
research community influenced that decision somewhat, but very little, and now
it is shut out of the discussion..."

And particularly relevant to the HN community at large: "Be courageous. Try
different things; experiment. Try to give a cool demo."

------
jmount
Up front: I like Unix and Linux. But I think a big part of why Unix/Linux won
is that you could acquire them. 90% of life is showing up. In the bad old
days you could not buy machines or OSes; until IBM got investigated for
antitrust, you leased them with a huge service contract. You had to sign
non-disclosures just to use the things. BSD/Unix was available and then
became unavailable during the Unix wars, giving Linux an entry.

------
michael_miller
> I expect some innovative OS research is being done in universities.

Sadly, this statement is predominantly false. That's not to say that there
aren't interesting ideas being tried out -- there are. It's just that the ways
in which Unix is flawed aren't significant enough to warrant switching to a
new system. Yeah, the security model of 'sudo' for any permissions at all
sucks, but it gets the job done on production systems, and there are (ugly)
workarounds to get better security. Yeah, dependency management sucks, but we
have VMs to work around versioning issues. There are tons of people proposing
solutions to these types of problems in academia, but Unix is entrenched,
there's tons of software written for it, and industry is not going to adopt an
alternate platform unless it is 10x better than what's out there. Academia is
proposing systems that are 10% better, not 10x better, and add complications
to the relatively simple process / file model of Unix.

------
AnthonyMouse
I don't think we need more operating systems. We just need more ideas, and to
make sure that our existing operating systems are modular enough to allow them
to fit into the picture.

Trying to recreate an entire operating system from scratch is almost
universally doomed, even when the end result is something "better" -- Plan9,
BeOS, take your pick. Doomed. Because if it's too great a departure from
existing platforms all at once then after you're done reinventing the wheel
somebody has to go out and build all new roads before you can actually deploy
it at scale. All the apps people expect have to be ported (and who is going to
port them to an OS nobody uses because it has no apps?), all the drivers for
all the different hardware have to be written (and who is going to write them
for an OS nobody uses because it has no drivers?), on and on.

Which doesn't mean they aren't useful as research projects rather than
operating systems: You build something that works on one hardware platform and
has no apps, but does something cool, and now you've got a proof of concept
and the operating system(s) people actually use can adopt it. And then after
they do, the research project OS fully dies because it no longer offers
anything novel and it still has no apps and no drivers and no developer
community hunting bugs and so forth.

The thing we need is an operating system modular enough to allow the
implementation of new ideas. Which we pretty much have; and if you can
identify an exception, kindly do your part and submit a patch.

Most people don't need a versioning file system, but if you do, they're
available. And if you don't like the existing ones then you can write your own
and plug it in without having to write your own SATA controller drivers. The
list of features in Plan9 is basically a TODO list for the Linux community
with a good chunk of the boxes already checked.

Evolution beats revolution better than 99 times out of 100. All competition is
simultaneously healthy/necessary and a wasteful duplication of effort. Having
the competition occur at a lower level of granularity reduces the
reduplication. Once you get one piece right, that piece can stay as long as it
fits while you keep iterating on everything else, without having to reinvent
and reimplement all the things that already work.

Saying "we need many more operating systems" is like saying "we need many
more mutually incompatible versions of the Internet Protocol": even if you
can point out a hundred reasons why IPv4 and IPv6 are both terrible, you're
still wrong.
There are areas where it's better to work collectively toward getting it right
the first time (or at least, getting it right the second or third time instead
of the thousandth time), because incompatibility and wasted reduplication are
not without cost.

------
dreamdu5t
_After all you would have to compete with an OS that is firmly entrenched on
85-90% of desktop systems on one hand and a free operating system on the
other_

Linux proves this wrong. People built that when the market was completely
dominated by Windows.

I think the author misses the real reason we don't see a bunch of new OS's:
Unix variants and Windows solve their problems very well, and at this point
most of the improvement is at the GUI/application layer (a la OS X).

~~~
timthorn
But the server OS market was much more fragmented, and that's where Linux had
its original success.

~~~
Someone
Also, the phone OS market was much more fragmented, and that's where Linux had
its other success.

The desktop, on the other hand… let's just say there is disagreement as to
whether Linux is a success there at the moment.

------
lukego
I'm writing open source networking software that has potential to become an
operating system like Cisco IOS. I see a niche opening up here because
networking firmwares running on hardware have been doing a great job for years
but are now being squeezed out by virtualization. There's room to replace them
with new software (but we have to act fast before everything but the kitchen
sink is shoehorned into the Linux kernel).

This is open source so if you want to do some relevant OS-style hacking then
you're welcome to get involved. It's called Snabb Switch and it's at
<https://github.com/SnabbCo/snabbswitch/wiki>

------
rbanffy
I'd like to suggest we actually need more diversity on every level of the
stack. Just about everything being sold as a "computer" is a modernized
version of the IBM 5150 that runs (and sometimes is built to run) Windows.
Those that don't, run something either inspired or directly derived from
AT&T's Unix. Every computer you buy today has one or more CPUs connected to a
single memory address space. When they have non-volatile memory attached, it's
treated like a disk drive.

Sometimes, when I work with a file, I have the disturbing feeling I'm reading
a stack of cards... Palm and Newton didn't even have files. I spent far too
much time trying to hammer trees into serial stores and vice versa because
those were the tools of the time. There have to be better ideas needing
exploration.

And a new OS idea, for the first time ever, does not require you to give up a
usable collection of software: most of the software that makes up a modern
Unix desktop requires little more than X and POSIX. Regardless of how weird
your underlying OS is, as long as you have those, programmers will be able to
use your OS and work on it, which is the only real way to get them to write
software for it.

------
zanny
I wish there were a reasonably specced-out microkernel OS whose case rested
not on conspiracy theory or academia but on the practical benefits of having
it, even if it never gained traction.

I see a very beautiful world where a small kernel could handle preemptive
scheduling, virtual memory, and provide a virtual file system and virtual
socket abstraction. Devices could use the socket layer and file system to take
control of DMI devices provided by the firmware, but still run in userspace.
If you used the plan9 "everything is a file" model, system devices could
easily exist in their own directory, etc.

Then you use a newer privilege model, where an application derives its rights
from its parent, who can either down-cast their rights or keep them at the
same level as the parent, and the privileges influence the application
specific view of the file system (like in Plan9).

For IPC, I'd really want to put in an effort to use the lower range ipv6
addresses in a highly optimized virtual socket dispatch so you could have _one
protocol to rule them all_. Every IPC and remote communication, to any device,
to any service on any machine, could be done over an abstracted ipv6 layer.
Screw the maligned utility of dbus, unix sockets, RPCs, etc, use what you
already have and optimize the hell out of that.

Also, I'd like to see a reasonable and sane file system layout. I like how
/usr in Unix has nothing to do with a user anymore, and how some systems
expect /cdrom to be a thing in an era where a tremendously small fraction of
machines have built-in CD-ROMs, and those would be SATA devices if not USB
externals. The Linux and OS X top-level file systems always drive me nuts.

I'd use that visibility privilege model to have multiple users: an all user,
a public user, a root user. System binaries would be under root; users would
have visibility on themselves and any other user they are given access rights
to (passworded or not), including all and public. You can install and
compartmentalize binaries and libraries appropriately. If you want a jail,
create a user without any access privileges on other users, and it can only
run in a personal vacuum. If you do a good security model where only
applications within +/-1 of the privilege hierarchy can see another process
in the socket layer, a jailed user would only have visibility on _servers_
like what we have in pulseaudio and X instead of hardware like the GPU or
ALSA, and those services could be hardened against aggressive input.

Such a system I feel would be a breath of fresh air for writing applications.
If the runlevels were something like:

0: kernel
1: hardware devices and init
2: hardware service servers, daemons, and login manager
3: max privilege user session, possibly more login managers and daemons to
support lower-privileged users
4..n: restricted user sessions with limited views of other contexts

Any layer could explicitly grant a higher-level service or daemon visibility
into the lower tiers, but it would need to be explicit; you have a very
limited scope by default.

I'd love to write software in a context like this. Security sounds better to
me, communication is unified, and what you see (in the filesystem) is what you
get.

~~~
MrVitaliy
Have you seen <http://en.wikipedia.org/wiki/MicroC/OS-II> ?

~~~
GeorgeTirebiter
And don't forget <http://www.minix3.org/> which, although unix-like, certainly
tries some new things (e.g. the reincarnation service).

------
dysoco
What is the point of having more OSes if even Linux isn't well supported by
driver suppliers or software developers? Just to have more fragmentation?

Sure, I love the idea of Plan9, I love small OSes like MINIX and revolutionary
UI ideas... but let's face the hard truth: most of these projects are destined
to live in the research field.

------
augustl
I'm wondering if it would make sense to build an OS based on principles
important in functional programming, such as immutability. I can't think of
any benefits right off the bat, but I'm also no OS designer...

------
bane
This may seem silly, but I'd love to see an extremely simple, single-user,
single-tasking OS like CP/M, but written to exploit modern hardware.

The incredibly tall software stack we all live on makes our hardware seem much
slower than it actually is. I've long wondered what we could actually get out
of modern hardware if we treated it like we used to treat hardware back in the
'80s and early '90s: close to the metal and highly optimized.

~~~
gilgoomesh
> The incredibly tall software stack we all live on makes our hardware seem
> much slower than it actually is

Actually, it doesn't really. Modern OSes take up far more RAM (because more
services are kept available), but when you're performing operations solely in
user space on a modern system, the OS doesn't really interfere. The only
impact on any multitasked system -- process scheduling -- is carefully written
to have an undetectably small overhead when there's only one busy process.

The reason why computers can still occasionally feel slow is:

1) They are processing millions (sometimes billions) of times more data than a
1980s-era computer.

2) You can always fill up the resources on your computer or overwork your hard
disk. This can slow your computer down a thousandfold.

~~~
bane
Sorry, I should have been more clear. I'm thinking about modern webapps
written in Javascript, living in a browser, on top of a modern OS pushing all
that around (while managing security and virus-scanning everything crossing
through I/O), with dozens of services running, listening and/or polling
various bits of hardware controlled by a scheduler -- I once had a printer
driver that ate 100MB of RAM and 5% of CPU time background-checking to make
sure I had paper and enough ink. Sure, the convenience of all that is
wonderful, but I can't help wondering if things should be much faster than
they are.

It really does seem insane to me that I can, at times, type faster than my
computer can keep up.

------
jakejake
It's interesting to me how different the opinions are of people who are
interested in research and advancing technology vs. business people and
consumers who, ultimately, are the ones using that technology.

I love seeing new progress, but I have to admit that my job would be a lot
easier if there were one operating system and one browser!

~~~
npsimons
It's particularly interesting to people like me (and I suspect many others on
HN), because we straddle the line ("wear many hats"). I personally love crazy
ideas ( _everything_ is a file! No, really, look at Plan9!) and wild research
directions. But for my day to day work, where I have to ship? Linux just
works. But I also like having options, both to play with (personal) and to
solve problems (work). In many respects, Linux already has enough flexibility
and options to satisfy me.

As for fragmentation, get used to it; it's part of the job. Things are more
unified now than they've ever been, though it might not stay that way, and one
of the major thrusts of the OP (and many comments here) is that there may be
_too_ much unification, and not enough variety and options.

------
jspthrowaway2
Whenever these discussions come up, people tend to blur kernel and userspace.
It's happening here, and it's not unforgivable, since most operating systems
package both together. That's actually half the reason Richard Stallman wants
you to say GNU/Linux: the technical correctness of the name (the other half is
what makes it annoying).

What adventures like Debian on FreeBSD (kFreeBSD) and even Android teach us,
though, is that you can take a great kernel with its sweeping hardware support
and build something better on top of it. The hardware work has been done for
you. Apple could have _easily_ built OS X and Darwin on top of Linux-the-
kernel instead of Mach (what an interesting world that would be), but they had
a limited range of hardware to target and the resources at their disposal to
tailor Mach to their known hardware targets.

The kernel for the Intel architecture is, I think, at this point mostly
figured out. Microkernel had its shot and on Intel we have seen that it's too
slow (even though it makes more sense on paper). Linux is pretty darned good,
and I think any attempt to be better will, after a few years, end up looking
like Linux from several years ago. The Intel architecture has been stable for
a long time, and a lot of clever minds have come up with ways to squeeze every
drop out of the machine in Linux.

When people say "I want a new operating system!" I'm not hearing them desire a
complete kernel rewrite; that's doomed to fail for a plethora of reasons
including hardware drivers. I'm hearing them desire a new userspace and GUI.
Make something like OS X based on Linux and I'd spend $500+ to buy it and fund
its development.

~~~
air
"The kernel for the Intel architecture is, I think, at this point mostly
figured out. Microkernel had its shot and on Intel we have seen that it's too
slow"

Does that judgment include QNX? I'm under the impression that it has quite
good performance for a microkernel.

~~~
socceroos
It has great performance. And on top of that, it is as stable as a rock.
Disclaimer: I love microkernels.

------
wissler
In order to transcend the modern operating system we need to transcend the
modern programming language paradigms.

~~~
GeorgeTirebiter
Why, may I ask? Isn't that a bit like saying to truly understand the human
brain, we'll need to transcend the use of English?

