Systems Software Research is Irrelevant (2000) [pdf] (bell-labs.com)



This paper is more than 10 years old. Since then a new systems research arena has emerged: virtualization / hypervisors. (Not exactly new, but new on the consumer market; before that, VMs lived only in mainframe land.)

"Compatibility" has killed OS research, because the OS's main job is to multiplex the underlying hardware among applications, and it seems that the current OS's do that well enough.

I think that "systems research" has today moved towards building infrastructure on top of VMs (Java / .NET). There's a lot of innovation there which would previously have gone into a shiny new (and incompatible) OS. (Example: Java security policies.)

Also: writing a new OS requires a lot of upfront investment with _zero_ research value: boot/startup code, managing the CPU's idiosyncrasies, _drivers_ for a bazillion different HW units, a filesystem, a TCP/IP stack, etc. No wonder people just tweak existing systems, as it's almost impossible to rip out these boring parts from Linux/*BSD kernels and reuse them in something new.

My personal wish for furthering today's systems research is something that would kill the need for hypervisors. Android, for example, runs each app under a different UID, thus achieving isolation without a hypervisor. Let's devise something similar for desktops/servers; bonus points if the system can run existing binaries.
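
To make that concrete, here is a minimal C sketch of the Android-style approach (not Android's actual code; APP_UID is a made-up per-application UID): a privileged launcher drops to a dedicated UID/GID before exec'ing the app, and the kernel's ordinary permission checks do the rest.

    /* Hypothetical launcher: run one app under its own dedicated UID.
     * APP_UID/APP_GID are made-up values picked at "install time".
     * Must be started as root so that setuid()/setgid() succeed. */
    #define _DEFAULT_SOURCE
    #include <grp.h>
    #include <stdio.h>
    #include <unistd.h>

    #define APP_UID 10042
    #define APP_GID 10042

    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s /path/to/app [args...]\n", argv[0]);
            return 1;
        }
        /* Drop supplementary groups and privileges: group first, then user,
         * so the process cannot regain root afterwards. */
        if (setgroups(0, NULL) != 0 || setgid(APP_GID) != 0 || setuid(APP_UID) != 0) {
            perror("dropping privileges");
            return 1;
        }
        /* From here on, ordinary UID-based permission checks keep this app
         * away from files owned by other apps' UIDs. */
        execv(argv[1], &argv[1]);
        perror("execv");
        return 1;
    }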


How about OSes that target the hypervisors? With hypervisors everywhere, they become a viable platform on which we can re-evaluate the assumptions we make about OSes.

For example, I'm working with a bunch of other people on MirageOS [1], which is a clean-slate library OS written entirely in OCaml that targets the Xen hypervisor. Since Xen works on ARM devices, we'll also be able to create applications/appliances for the coming wave of embedded devices (aka the Internet of Things) -- in addition to running things on the public cloud.

If you're interested, there's an ASPLOS paper [2] where you can find out more. There will also be a CACM article in the new year that covers more of the wider background.

[1] http://openmirage.org and http://nymote.org/blog/2013/overview-of-mirage/

[2] http://nymote.org/docs/2013-asplos-mirage.pdf


> How about OSes that target the hypervisors?

Yup, that's definitely a new area. I guess you're using the hypercall API instead of pretending to talk to the raw HW?

In a way, the hypervisor has become "the" OS, while "an" OS is now just an application running within the hypervisor. Can we avoid the additional (hypervisor) layer? I think so. But: 1) how, and [harder] 2) while maintaining compatibility with existing applications? (Not existing OSes, just existing applications.)


Does something like http://erlangonxen.org/ fit into this area?


Since you're in this domain, what other topics or projects are you interested in? (Maybe you don't even have time for that.)

PS: I love the idea of operating system + ML. I remember BitC tried to bring types to the metal, without much success.


That's actually my favorite area of research at the moment.


What I find fascinating about this talk is that almost every point was true then, but not now. Between tablet computing, the popularisation of dynamic languages, and the scaling / big data problems that came with Web 2.0, everything has changed.

I think the talk shows how much can change in 10 years.

What would a talk like that today look like? What field would it be about?

And also, while he was writing this, what theoretical research was crystallizing that shaped today?

There is a problem with academia - it is often used by the government to answer industry's inability to invest for the long term, rather than fixing the economic/legal system so that industry is incentivised to. All this life-sciences research oriented towards medicine going on at universities should be privately funded. Same with the other current booms in academia, like renewables. That they are not shows that something is broken in our version of capitalism.


I'm currently working in an academic setting, and from my observations the "phenomenology" approach to papers is still very prevalent. I see a lot of papers which are just measuring A vs B vs C in some way.

The point about project size is also still valid. There is no way a small research group at a university can "compete" against Rackspace or Amazon in cloud computing innovation.


Yes, there is also a plethora of new languages popping up and actually getting some traction, while 10 years ago was probably the high-water mark of the Java monoculture.


There is an argument to be made that "the limitations of traditional UNIX systems [have been addressed by] hypervisors and containers." This is a point Glauber Costa made about a year ago, and it was discussed in the LWN article about his talk, "LCE: The failure of operating systems and how we can fix it" [0].

Seeing your reply to rictic, it seems you want something else? I'm not sure what that is. Do you want some automation of this sandboxing for each application? Either way, I think rictic was right to bring up the work that has been done in the Linux kernel with KVM, namespaces, etc. As one commenter on the LWN article points out, "with KVM we call hypervisors 'Linux'".

[0] https://lwn.net/Articles/524952/


The Linux kernel has within it a lot of namespace facilities, mostly meant to support OS-level virtualization. Because of how these are designed, however, you can build a system that is only partially virtualized. For instance, you can run your binary with an isolated file system and process space, but without a virtualized network interface (or vice versa).
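
A minimal sketch of that à-la-carte behaviour, assuming root and a reasonably recent kernel: clone() takes the namespace flags individually, so you can request private mount and PID namespaces while keeping the host's network.

    /* Partial virtualization with clone(): the child gets its own mount
     * and PID namespaces but keeps the host's network namespace, because
     * CLONE_NEWNET is deliberately left out. Needs root (CAP_SYS_ADMIN). */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static char child_stack[1024 * 1024];

    static int child(void *arg)
    {
        (void)arg;
        /* In here we are PID 1 with a private copy of the mount table,
         * yet `ip addr` still shows the host's network interfaces. */
        execlp("/bin/sh", "sh", (char *)NULL);
        perror("execlp");
        return 1;
    }

    int main(void)
    {
        int flags = CLONE_NEWNS | CLONE_NEWPID | SIGCHLD; /* no CLONE_NEWNET */
        pid_t pid = clone(child, child_stack + sizeof(child_stack), flags, NULL);
        if (pid == -1) {
            perror("clone");
            return 1;
        }
        waitpid(pid, NULL, 0);
        return 0;
    }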


Are you familiar with the work that's gone into the Linux kernel in the past few years to do just that? Some search terms for further reading: lxc; docker; lmctfy.


I'm aware of these, as well as of Solaris containers. But they all seem to do too much. I would like to have a "container" for each instance of an untrusted application. One for Firefox. One for Thunderbird. One for Acrobat Reader. Assign different, overlapping mounts to /home/user for each application, but let the root container manipulate them easily. Etc.

The only thing I'm aware of that does something akin to this is Rutkowska's Qubes OS.
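
For what it's worth, the per-app /home part of that wish can be sketched with nothing more than a mount namespace and a bind mount; the paths below are made up for illustration, and it needs root:

    /* Give one untrusted app its own view of /home/user via a private
     * mount namespace and a bind mount. Paths are hypothetical. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <sys/mount.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s /path/to/app [args...]\n", argv[0]);
            return 1;
        }
        if (unshare(CLONE_NEWNS) != 0) {          /* private mount namespace */
            perror("unshare");
            return 1;
        }
        /* Stop our mount changes from propagating back to the host. */
        if (mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL) != 0) {
            perror("mount MS_PRIVATE");
            return 1;
        }
        /* Hypothetical per-app home; the real /home/user stays untouched
         * in the parent namespace, where the admin can still inspect both. */
        if (mount("/srv/app-homes/firefox", "/home/user", NULL, MS_BIND, NULL) != 0) {
            perror("bind mount");
            return 1;
        }
        execv(argv[1], &argv[1]);
        perror("execv");
        return 1;
    }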


What's stopping you from doing that right now?

It's just a ton of system administration work. I've been playing with containers and chroots lately, and I can actually do exactly what you're asking for. Right now it's for servers (i.e. always running a mail server, DNS server, web app, etc. in their own containers). But eventually I would want to do it for desktops and sort out issues with X and sandboxing, etc.

> My personal wish for furthering today's systems research is something that would kill the need for hypervisors. Android, for example, runs each app under a different UID, thus achieving isolation without a hypervisor. Let's devise something similar for desktops/servers; bonus points if the system can run existing binaries.

This isn't really a research topic. It's mainly an implementation issue. It's possibly a standards issue, but I think giving each app its own POSIX environment with Linux namespaces gets you almost all the way there.

The main obstacle I'm having is that distros like to spew files all over the file system. So although I could technically use raw .deb packages, I'm ending up making my own packages that are better suited to the "every application in a container" model.

So to make what you're asking for a reality, I think the things that need to happen are:

- Linux needs to get hardened more (right now it is not entirely safe to run untrusted apps unless you add something like seccomp, which ChromeOS contributed to the kernel for this use case; see the sketch after this list)

- There needs to be the equivalent of Debian ("the universal operating system") for applications in containers. This just requires a lot of mucking with configure and build scripts. It's not hard, just tedious. Application authors have to learn to package in a slightly different way.
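
As a concrete (if simplistic) illustration of the hardening mentioned above, here is a sketch using seccomp's original strict mode; real sandboxes such as Chrome's rely on the more flexible BPF filter mode:

    /* Seccomp "strict" mode: after the prctl() call the process may only
     * use read(), write(), _exit() and sigreturn(); any other syscall gets
     * it killed. */
    #include <linux/seccomp.h>
    #include <stdio.h>
    #include <sys/prctl.h>
    #include <unistd.h>

    int main(void)
    {
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0) {
            perror("prctl(PR_SET_SECCOMP)");
            return 1;
        }
        const char msg[] = "still alive inside the sandbox\n";
        write(STDOUT_FILENO, msg, sizeof(msg) - 1);   /* allowed */
        /* open(), socket(), etc. would now terminate the process. */
        _exit(0);                                     /* allowed */
    }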


> It's just a ton of system administration work.

Let's say I'd like to have a context menu where I can choose "Run in sandbox"; then I would either pick a preexisting sandbox configuration or define a custom one. IMO, there's plenty of research to be done on how users can seamlessly define allowed data flows (what the sandboxed application can read and write), and how to implement those allowed data flows.

There's a fine line between "implementation issue" and "research topic". Research can touch fields other than the purely technical, e.g., HCI.

As an example, GPG is a powerful system, but all available UIs and "integrations" with existing mail clients are clunky at best. We don't know how to create a good crypto UI for GPG. There's a research topic.

Therefore, even the technically best sandbox won't be used if it entails a "ton of sysadmin work". (For example, I have Linux installed in a VirtualBox VM, but I don't use it often because the integration with the host OS -- e.g., clipboard sharing -- is clunky.)


> I would like to have a "container" for each instance of an untrusted application.

You might like to look at Bromium (bromium.com), founded by a bunch of ex-Xen folks. I think they were doing something like this for the enterprise.


> The holy trinity: Linux, gcc and Netscape. [...] Besides, systems research is doing little to advance the trinity.

It's easy to miss that this was pre-LLVM. LLVM was an interesting development from the systems/design perspective more than the compiler technology one.


"Programmability-- once the Big Idea in computing-- has fallen by the wayside." ouch


No mention of OS X and iOS; is Apple the new Microsoft these days?


I think touch as a UI component qualifies as innovation, but OS X and iOS are really just other variants of Unix.



