

Renraku: Future OS - daeken
http://daeken.com/renraku-future-os

======
old-gregg
I am convinced that this is the right path for OS evolution. Perhaps not
necessarily .NET based, but the future OSes simply have to borrow heavily from
the best of the current VM implementations: garbage collection, JIT
compilation and rich standard library should all be provided by the OS (as
opposed to in-process VM) and shared between all processes written in higher
level languages.

The separation of kernel/userland must also go away, and process isolation
should be done via code verification, another trick a modern (yet
non-existent) OS must borrow from sandboxing VMs.

~~~
uriel
Heard of Inferno? <http://code.google.com/p/inferno-os/>

Bell Labs as usual well over a decade ahead of their time...

~~~
daeken
Inferno is very interesting. However, I tend to believe that having the entire
kernel written in managed code is the way things need to go. By doing so, you
reduce the attack surface to a tiny amount of code and make it easier to
develop.

That said, Inferno was way ahead of its time, and quite a few of the ideas in
my system are based on ideas from it and Plan 9 itself; for example, the
'everything is an object' paradigm is really a natural extension of Plan 9's
'everything is a file' paradigm.

~~~
uriel
> for example, the 'everything is an object' paradigm is really a natural
> extension of Plan 9's 'everything is a file' paradigm.

I used to think that way long ago, but over time I realized that it is the
inverse: 'everything is a file' is an extension and a very powerful
refinement of the 'everything is an object' paradigm.

What makes file-based interfaces so powerful is that they provide a uniform
and almost universal framework for representing any kind of resource. The
constraints this imposes are _very useful_ both technically (eg for
uniform caching, remote access, proxying, filtering, namespaces, ...) and as a
way to narrowly focus the mind when designing interfaces.

~~~
daeken
I wasn't speaking in terms of actual relations (since files are a subclass of
objects in my mind), but in terms of what inspired me.

That said, I disagree that files are a more powerful refinement. So much of
our code is spent converting to and from files, it's a curse more than a
blessing. The only really powerful thing about files is that there are tools
on our current systems to manipulate them, but I don't think that has to stay
that way. Why can't we grep over a collection object from the command line
like we do with a file now? Why can't we have good network-transparent objects
(a key feature in Renraku)?

The file paradigm in general is tired, in my opinion. Streams just aren't a
good mapping for the way we handle data.
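The kind of pipeline I have in mind can be sketched like this (Python here
purely for illustration; the `Process` type, the sample data, and `ogrep`
are all hypothetical, not Renraku APIs):

```python
# Sketch: "grep" over a collection of objects instead of a byte stream.
# The Process type and the sample data are made up for illustration.
from dataclasses import dataclass

@dataclass
class Process:
    pid: int
    name: str
    rss_kb: int

def ogrep(predicate, objects):
    """An object-aware grep: filter structured objects by a predicate,
    with no serialization to text and re-parsing in between."""
    return [obj for obj in objects if predicate(obj)]

procs = [
    Process(1, "init", 1200),
    Process(842, "httpd", 52000),
    Process(977, "httpd", 61000),
]

# The moral equivalent of `ps aux | grep httpd`, but on typed fields:
matches = ogrep(lambda p: p.name == "httpd", procs)
print([p.pid for p in matches])  # -> [842, 977]
```

The point is that the filter sees fields, not lines, so there is nothing
to quote, escape, or re-parse downstream.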

~~~
DougWebb
Perhaps the design should be "everything is a Resource" (ala REST) rather than
"everything is an Object".

The beneficial aspect of "everything is a File" is the uniform interface: you
can read a file, write to a file, seek to a particular position in a file
(sometimes), and that's about it. It seems limiting, but that's what allows
the huge number of interoperable tools to be built.

By going with "everything is an Object", there are no constraints on the
interface. Every class of objects has its own set of methods, and tools need
to be designed for specific classes/interfaces rather than for "everything".
Interoperability will be lost.

Resources are like objects, but constrained to a uniform interface: their
methods are GET, PUT, POST, DELETE, OPTIONS, HEAD. That's all the methods you
need to manipulate individual objects and collections of objects. Of course,
you'll need uniform identifiers (URLs) for the objects, and a uniform
representation (or a set of standard representations.)

This will give you network-transparent resources, assuming you use globally
unique URIs. It also turns the OS into a generic Web Service. I'm not sure
what the implications are of that, but it seems like it might be interesting
to explore.
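A rough sketch of what the uniform interface buys you (all names here are
hypothetical, not any real OS API):

```python
# Sketch of "everything is a Resource": every object in the system
# exposes the same small set of verbs, so generic tools can operate on
# any of them. The Resource class and the copy tool are hypothetical.
class Resource:
    def __init__(self):
        self._items = {}

    def get(self, key):
        return self._items.get(key)

    def put(self, key, value):
        self._items[key] = value

    def delete(self, key):
        self._items.pop(key, None)

    def options(self):
        # Uniform introspection: every resource answers the same way.
        return ["get", "put", "post", "delete", "options", "head"]

# A generic "copy" tool that works on *any* resource because the
# interface is uniform -- the analogue of cp(1) working on any file.
def copy(src, dst, keys):
    for k in keys:
        dst.put(k, src.get(k))

a, b = Resource(), Resource()
a.put("greeting", "hello")
copy(a, b, ["greeting"])
print(b.get("greeting"))  # -> hello
```

The `copy` tool never needs to know what kind of resource it is moving,
which is exactly the interoperability that class-specific methods lose.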

~~~
berntb
> The beneficial aspect of "everything is a File" is the uniform interface:
> you can read a file, write to a file, seek

My C days weren't in this millennium, but have you ever tried this? :-)

    man ioctl

(A quick check shows that you get the real info in _man ioctl_list_. Even
fcntl() has some extras, like locking parts of files.)
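To make the point concrete, here is byte-range locking via fcntl in Python
(POSIX-only sketch; the file contents are just an example):

```python
# Byte-range locking goes through fcntl(), not through the plain
# read/write/seek file interface -- which is exactly the point above.
# POSIX-only; fcntl.lockf wraps fcntl-style record locking.
import fcntl
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"0123456789")

# Exclusively lock 4 bytes starting at offset 2 -- an operation with no
# expression in the minimal "read, write, seek" vocabulary.
fcntl.lockf(fd, fcntl.LOCK_EX, 4, 2)
# ... critical section over that byte range ...
fcntl.lockf(fd, fcntl.LOCK_UN, 4, 2)

os.close(fd)
os.unlink(path)
print("locked and unlocked bytes 2-5")
```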

~~~
uriel
ioctl was a mistake by people that didn't understand the "everything is a file
principle" (a _huge_ mistake I might add).

The original Unix from Bell Labs had no ioctl, Plan 9 has no ioctl, and the
Linux people have been claiming to want to eventually get rid of all ioctls
due to all the problems they cause, but the inertia and all the people that
seem incapable of writing interfaces without ioctl means it will be ages
before they get there.

~~~
berntb
So locking parts of files will be a file-based API?

Does sound weird.

------
wvenable
My suggestion: plan on adding some kind of real file system. I've worked with
OSes that try to de-emphasize (or eliminate) the traditional file system and
it just doesn't work. Programmers eventually hack their own "file system" to
work with the files that everyone has. When every computer is networked and
connected to the Internet, communicating by files, file names, and file
extensions is universal.

~~~
daeken
I agree, this is essential. The reason I haven't mentioned it is that I simply
have no idea how to do it in Renraku yet. That said, I have a lot to figure
out about the object store in general, so I'm hoping it comes to me during
that.

------
sb
I have looked at the other comments, and what I found lacking was any
mention of the versioning problems with objects (which can be a problem for
files, too). There was a very good article on evolving APIs in the Eclipse
technical articles (2003-ish), but I guess that is not the way to go for an
operating system...

(Then again, .NET has versioning capabilities, but I guess the "interface"
becomes complicated very fast.)

~~~
omouse
Indeed, versioning is very important. The trouble will be the structure.
Sure, everything's an object, but how will you keep track of differences in
images or text documents? You might need something more fine-grained.

------
profquail
I saw a news article today from about a month ago, saying that Microsoft had
decided to release the .NET Micro Framework SDK and porting kit for free (it's
available for download now, I checked). I've been wondering for a while why
people don't use it (or something similar) to write device drivers; it seems
to me like that would go a long way towards making a system rock-solid.

Also, one good thing about a properly written, all-managed system is security.
Assuming that the underlying OS is bug-free (a huge assumption, I know), would
there be any way to exploit such a system remotely?

One other cool feature I thought of would be to implement some base classes
for things like images, sounds, and movies, then implement various codecs and
file formats using formatters and such in the System.Runtime.Serialization
namespace. Thus, it'd be pretty easy to add support for a new codec or file
format, since the codec class' assembly could just be copied to a special
directory, then loaded via reflection.
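The drop-in-a-directory idea can be sketched like this (Python standing in
for the .NET reflection machinery; the `_codec.py` naming convention and
the `encode()` "interface" are hypothetical):

```python
# Plugin-style codec loading: drop a codec module into a directory and
# discover it by reflection at runtime. The module body written below
# and the encode() convention are made up for illustration.
import importlib.util
import os
import tempfile

plugin_dir = tempfile.mkdtemp()
with open(os.path.join(plugin_dir, "rot13_codec.py"), "w") as f:
    f.write(
        "import codecs\n"
        "def encode(data):\n"
        "    return codecs.encode(data, 'rot13')\n"
    )

# "Reflection": discover and load every codec found in the directory.
loaded = {}
for name in os.listdir(plugin_dir):
    if name.endswith("_codec.py"):
        spec = importlib.util.spec_from_file_location(
            name[:-3], os.path.join(plugin_dir, name))
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        loaded[name[:-3]] = mod

print(loaded["rot13_codec"].encode("hello"))  # -> uryyb
```

Adding a new format then really is just copying one file into the
directory; no registration step, no recompile.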

A final note...think about how awesome it would be to have a full-featured,
well-written managed OS. Since everything on top of it is also managed, it'd
be very easy to port it to another platform (all you need to do is port the
very fundamentals of the OS, and the CLR takes care of the rest!)

~~~
daeken
Managed code ensures things like buffer overflows are a thing of the past, so
long as your compiler is secure. It doesn't protect against the design flaws
that often lead to security breaches, but it's a start. I'll take securing a
compiler and design over securing hundreds of millions of lines of code,
though, any day of the week.

Edit: As for codecs and such, this is why I like my object store idea. Your
codec class would encode to a bitstream like now, but the entire class would
be there on disk. You could send the whole object across the wire and so long
as they are using the same ICodec (or whatever) interface, it Just Works (TM).
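The "Just Works" part can be sketched as follows (Python in place of .NET;
`ICodec` and both implementations are hypothetical stand-ins):

```python
# Sketch of the ICodec idea: generic code programs against the
# interface only, so any implementation that honors it interoperates.
from abc import ABC, abstractmethod

class ICodec(ABC):
    @abstractmethod
    def encode(self, obj: str) -> bytes: ...

    @abstractmethod
    def decode(self, data: bytes) -> str: ...

class Utf8Codec(ICodec):
    def encode(self, obj): return obj.encode("utf-8")
    def decode(self, data): return data.decode("utf-8")

class ReversedCodec(ICodec):
    # A deliberately different bitstream, same contract.
    def encode(self, obj): return obj[::-1].encode("utf-8")
    def decode(self, data): return data.decode("utf-8")[::-1]

def round_trip(codec: ICodec, obj: str) -> str:
    # Generic consumer: knows the interface, never the concrete codec.
    return codec.decode(codec.encode(obj))

for c in (Utf8Codec(), ReversedCodec()):
    print(round_trip(c, "renraku"))  # -> renraku, both times
```

Ship the codec class alongside the encoded bytes and the receiving end
needs nothing beyond the shared interface.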

------
cturner
Something I've been thinking about - "Everything is a filesystem" seems to be
a more powerful focus than "Everything is a file". It encourages you to think
about wrapping file-system driven APIs around applications. This is a focus in
my current project.
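What I mean by a file-system-driven API, as a toy sketch (every name here
is hypothetical; a real version would mount this through the kernel, in the
spirit of Plan 9's /proc):

```python
# An application exposing its state as a synthetic file tree, so
# generic file tools can poke at it. SynthFS and the paths are made up.
class SynthFS:
    def __init__(self):
        self._files = {}

    def write(self, path, data):
        self._files[path] = data

    def read(self, path):
        return self._files[path]

    def ls(self, prefix="/"):
        return sorted(p for p in self._files if p.startswith(prefix))

# A music player wrapping a file-system-driven API around itself:
player = SynthFS()
player.write("/player/ctl", "pause")    # commands are file writes
player.write("/player/volume", "70")
print(player.read("/player/ctl"))       # -> pause
print(player.ls("/player"))             # -> ['/player/ctl', '/player/volume']
```

Once the application is a filesystem, `echo pause > /player/ctl` is the
whole remote-control protocol.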

------
gabriel
This also reminded me of the Singularity research project from Microsoft:
<http://research.microsoft.com/en-us/projects/singularity/>

I've actually used some of the concepts in this research for writing a secure
distributed testing platform. So it has many applications outside of the
operating system.

Really cool stuff is going on in this area between OS and languages.

~~~
daeken
Had to double-check my article to make sure I didn't cull out my Singularity
reference, as it was a big influence on Renraku's development. If
Singularity were released under a non-tainting license, Renraku would likely
be a distribution of it rather than an OS unto itself. MSR is doing some
amazing things; I can't wait to see what else they do with it.

~~~
gabriel
I noticed Singularity hadn't been mentioned directly on HN before so that's
why I threw a link in here :)

I also wish the Singularity project had a friendly license. I also had to do a
lot of work up front, so congrats on the work you've done! Truly cool.

~~~
pohl
It's good to see some folks on HN who have an interest in Singularity-like
concepts. It makes me want to take this opportunity to ask if anybody has
noticed the similarities between those projects and things going on over in
the LLVM world.

To my eyes, LLVM brings many of the same things to the table as the Bartok
compiler, which also uses an SSA IR to provide the safety needed to run
everything in ring 0.

Furthermore, if one reads the pubs directory over at llvm.org, one sees
research papers where a few instructions were added (LLVA) that give LLVM the
ability to host a modified version of linux where everything is managed within
LLVM, save a very tiny shim between LLVM and the hardware.

There's also some papers on LLVM-SVA (Secure Virtual Architecture) where the
same concept is extended to "enforce fine-grained (object level) memory
safety, control-flow integrity, type safety..."

So to my amateur eyes, it looks like these research projects are very similar,
with one being less overt about the direction it's headed.

Am I high? Has anybody else noticed this?

~~~
gabriel
Yes! I've also seen the similarities between all of these things and LLVM.

Part of LLVM is an interest in correctness. I've seen more of an interest in
these areas in research as well. For example, there was even a recent research
highlight in an ACM magazine about "Formal Verification of a Realistic
Compiler": <http://pauillac.inria.fr/~xleroy/publi/compcert-CACM.pdf>

Plus, newer companies like Coverity (<http://www.coverity.com/html/research-
library.html>) bring a sense of credibility to a practice that hasn't had much
traction in the industry.

I think all of these ideas can come together to make something quite useful.
But I suppose bringing it all together is the hard part :)

Update: I also saw lots of cool associations to the Self Programming Language
(<http://research.sun.com/self/language.html>), which includes some great
research, especially in their paper "Self: The Power of Simplicity":
<http://research.sun.com/self/papers/self-power.html>

~~~
russellallen
More current Self link is the official homepage at <http://selflanguage.org>

