An immutable operating system (augustl.com)
286 points by agumonkey on Feb 2, 2014 | 150 comments



I wrote an operating system in a purely functional dialect of Lisp; http://losak.sf.net

There are a few things I learned in the process that I think are relevant here:

1. Forget garbage collection in the sense of any variant of mark-and-sweep. For an operating system, latency is more important than throughput, so you really want real-time memory management. If you use reference counting, there is a lazy variant that gives you just that: no pauses due to reclaiming free memory. The downside of reference counting is that it doesn't work with cycles in heap-allocated memory, but if your data is all immutable then you can prevent cycles from being created in the first place. (There's a rough sketch of the lazy variant at the end of this comment.)

2. Treat mutable memory as a form of I/O. Even if you just want to use memory mapped I/O, you will need some way to read and write to (some portion of) memory. Set aside a fixed block of memory (say, for instance, the first 10 MB), and only allow that memory to be read and written in an imperative fashion. Keep the managed heap completely separate from that mutable block, and don't allow any direct access to the managed memory.

3. You can actually perform preemptive multitasking entirely at compile time (or at program load time for apps not included when you build a binary for your OS). I've done this for both my OS and for Scheme (https://github.com/ojarjur/multischeme). It works extremely well and winds up being much faster than using the native primitives usually used for building multitasking support. For more details on exactly how to do it, see the link to my Scheme implementation. (A sketch of the shape of the generated code is at the end of this comment.)

4. The biggest difficulty the author will hit, and the most likely cause for performance issues, will be in code generation. Since the proposal is for a new bytecode, the author will have to implement their own interpreter and/or JIT compiler for it, and writing an efficient one is an extremely large chunk of work. I've built an operating system and a programming language, and the language is by far the harder project. This is why I went from working on a new OS written in a new language to re-implementing my compile-time multitasking technique for Scheme; it allows me to take advantage of all of the hard work people have done to make compilers for that language which produce fast code.
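
To make points 1 and 3 a bit more concrete, here are two rough sketches. First, one possible shape of the lazy reference counting mentioned in point 1 (the names and structure are illustrative, not taken from losak): a count that drops to zero just pushes the object onto a to-free list, and the allocator reclaims a bounded number of entries per allocation, so no single operation pays an unbounded pause.

    /* Lazy (deferred) reference counting: reclamation is deferred and rationed,
     * so releasing a huge structure never causes a long pause. */
    #include <stdlib.h>

    typedef struct Obj {
        int refs;
        struct Obj *child;       /* immutable data can't form cycles, so plain counts suffice */
        struct Obj *next_free;   /* link used only while on the deferred free list */
    } Obj;

    static Obj *free_list = NULL;

    void release(Obj *o) {
        if (o != NULL && --o->refs == 0) {
            o->next_free = free_list;   /* O(1): defer the actual teardown */
            free_list = o;
        }
    }

    /* Called from the allocator: reclaim at most `budget` dead objects per call. */
    void reclaim_some(int budget) {
        while (budget-- > 0 && free_list != NULL) {
            Obj *o = free_list;
            free_list = o->next_free;
            release(o->child);          /* children may join the free list in turn */
            free(o);
        }
    }

And a hand-written illustration of the kind of output a compile-time multitasking transformation (point 3) can produce: each task becomes a resumable state machine with yield points compiled in, and a trivial scheduler interleaves them without interrupts. This only conveys the shape of the generated code; the multischeme link above describes the actual technique.

    #include <stdio.h>

    typedef struct { int pc; int i; } Task;       /* pc records where to resume */

    /* An ordinary counting loop, rewritten with a yield point in each iteration. */
    static int task(Task *t, const char *name) {
        switch (t->pc) {
        case 0:
            for (t->i = 0; t->i < 3; t->i++) {
                printf("%s step %d\n", name, t->i);
                t->pc = 1;
                return 1;                          /* yield back to the scheduler */
        case 1: ;                                  /* the next call resumes here */
            }
        }
        return 0;                                  /* task finished */
    }

    int main(void) {
        Task a = {0, 0}, b = {0, 0};
        int live_a = 1, live_b = 1;
        while (live_a || live_b) {                 /* round-robin at the compiled-in yield points */
            if (live_a) live_a = task(&a, "A");
            if (live_b) live_b = task(&b, "B");
        }
        return 0;
    }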


> For an operating system, latency is more important than throughput

One interesting exception to this rule is operating systems for network packet routers. These need to be optimized for throughput over latency. I mean, the difference between 10 and 20 million packets per second is important, but the difference between 10 and 20 microseconds of processing delay is not.

This is a fun area to be in at the moment because simple academic-looking code can actually compete with highly optimized kernels like Linux because those have been optimized for something else.


When you're talking about an average latency of single digit us for a regular switch and < 500ns for a low-end performance switch, the difference between 10 and 20 microseconds is massive.


Packets between internet hosts spend about 10,000,000ns in transit and they traverse around $10,000,000 worth of network elements on the path. These numbers are both worth optimizing. I'd say that the lowest hanging fruit for operating system hackers like myself is the cost i.e. serving more customers with fewer boxes. That latency figure is tougher because it's mostly due to the speed of light rather than processing time.


Accidentally clicked on the wrong icon and downvoted you, sorry, my bad.


I gave a corrective upvote :)


> For an operating system, latency is more important than throughput

The old OS/390 that is still processing your bank statement and your payroll is another counterexample showing that the above statement is too general.


Network switches don't handle most packets at the OS level, but in hardware.


Author here. The language is the part I dread the most, and it's also the part that would be the least unique and the least interesting to "re-invent". I'll look into finding an existing language that runs on bare metal and that has (or supports) immutable values.


There were some dataflow languages like VAL or SISAL that did exactly that. The hardware and software on top was built in the 70s and 80s, but that sort of parallel research funding dried up after the cold war ended.

There is a resurgence of dataflow-related research again, which you would probably be interested in. For example, Jack Dennis is pushing the 'Fresh Breeze' architecture: a CPU with write-once memory and in-hardware GC. This should be right up your alley:

http://csg.csail.mit.edu/Users/dennis/final-pact.pdf


Haskell


- HaLVM : https://github.com/GaloisInc/HaLVM

- House: http://programatica.cs.pdx.edu/House/

- SeL4 : http://ssrg.nicta.com.au/projects/seL4/

The emphasis is on the security and verifiability that typed, pure-by-default Haskell provides, rather than on OS design.


There's actually been a bit of work on this very idea!

http://programatica.cs.pdx.edu/House/


Funny sidenote: one of the House devs (Jeremy Bobbio) gave a talk at FOSDEM 2014 about reproducible builds for Debian, similar to Nix (also immutable in spirit), I believe.

https://fosdem.org/2014/schedule/event/reproducibledebian/


The current Haskell execution engine is very imperative indeed.

Instead of graph reduction, you could drive your execution in a completely immutable way by using string reduction. I don't think this has been explored much; I only found mention of it in 80s FP books (and Dybvig's thesis).


What about D 2.0? I haven't used it but I think they changed everything to const by default. Maybe you would have to write a collections library, not sure.


Rust?


Have considered writing the entire kernel in Rust. Haven't considered using it for the system language, will definitely consider that.


If you do end up using Rust for any part of this project, we'd love to hear about the experience on our mailing list:

https://mail.mozilla.org/listinfo/rust-dev


Forget garbage collection in the sense of any variant of mark-and-sweep. For an operating system, latency is more important than throughput, so you really want real-time memory management.

Is the assumption that mark and sweep is slower than real-time memory management? It's actually been proven false, e.g. the .NET CLR runs faster than native C code in certain situations.

If you use reference counting, then there is a lazy variant that gives you just that; no pauses due to reclaiming free memory.

Reference counting is extremely slow because of how memory caching works. Whenever you update reference counts, you're thrashing the CPU cache. It seems like there's no way refcounting could be faster than mark-and-sweep because of this. What am I missing?


Tracing garbage collection has better throughput than reference counting, but reference counting has lower and more predictable latency.

If you were actually going to try to do tracing GC on the operating system level, you would probably have to use one of the more involved GC algorithms that require kernel support like Azul's C4 collector.


> you would probably have to use one of the more involved GC algorithms that require kernel support like Azul's C4 collector

Which shouldn't be such a problem if you're already writing an OS?

Also AIUI the only "hard part" of C4 is making sure objects don't get modified while they're being moved (that's what the barrier's there to detect) - which isn't a problem if everything's immutable.


> Which shouldn't be such a problem if you're already writing an OS?

I guess that could've used more context. I was trying to point out that in the larger landscape of GC algorithms, the one used by .NET is not that advanced, even though it's better than the GC used by common scripting languages.

> Also AIUI the only "hard part" of C4 is making sure objects don't get modified while they're being moved (that's what the barrier's there to detect) - which isn't a problem if everything's immutable.

Would absolutely everything be immutable? How would you implement lazy evaluation with the correct time complexity?

The expensive part of any garbage collector with short pause times is bound to be the read barrier, which is still needed with only immutable data. If the GC is compacting and wants to relocate an object, it has to rewrite all references to point to the new copy of the object.

C4 uses a read barrier to enforce the invariant that all references point to the up-to-date copy on a load of a reference, not just on a use of a reference. It relies on its ability to immediately update all references to reuse compacted pages during compaction, which means that gratuitous amounts of free memory are not required (a common problem with moving collectors).

You might be able to come up with an alternate collector design that uses some of the same ideas as C4 while exploiting immutability, but if you loosen the healing read barrier you weaken your ability to reuse memory and to bound memory growth, since a reference to an old copy might always escape to the next round of collection.
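
To make the "healing" part concrete, here's a rough sketch of a self-healing read barrier in the style described above (the field names and layout are made up for illustration, not Azul's actual code):

    #include <stddef.h>

    typedef struct Obj {
        struct Obj *forwarded;   /* NULL, or the address of the relocated copy */
        /* ... payload ... */
    } Obj;

    /* Every reference load goes through this barrier. */
    static inline Obj *load_ref(Obj **slot) {
        Obj *o = *slot;
        if (o != NULL && o->forwarded != NULL) {
            o = o->forwarded;    /* follow the forwarding pointer... */
            *slot = o;           /* ...and heal the slot so it never pays this cost again */
        }
        return o;
    }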


"Tracing garbage collection has better throughput than reference counting, but reference counting has lower and more predictable latency."

Exactly this. I would actually back off a bit and only claim that reference counting has more predictable latency, but the point is to sacrifice throughput in order to reduce worst case latency.


> It's actually been proven false, e.g. the .NET CLR runs faster than native C code in certain situations.

[citation needed]

and by citation I mean example code that "proves" this.


throughput != latency


I gave an un-conference talk at Clojure Days Amsterdam called "Purely Functional OS" on this exact topic, and we had an incredibly interesting discussion.

The conclusion we kept coming back to is that it's technically not all that difficult to implement, but that to make it usable in the real world would mean that computers would have to get a lot more cautious about source vs. derived data.

The main thing is that, on the one hand, it seems pretty feasible to store all meaningful data a user generates over the course of using a computer and to build an immutable, persistent OS on top of that.

On the other hand, when you think about doing this in the context of an actual computer as in use today, you will very quickly run out of storage space.

What separates these two scenarios is that a lot of data is (deterministically) derived from source inputs, but in the current state of technology it's impossible to determine what is derived data which can be easily recomputed and what is essential data without which the current state could not be reached.

I happened to just be reading this article on unikernels which coincidentally is something that would help get us a bit further along in that direction: http://queue.acm.org/detail.cfm?id=2566628

P.S. When thinking about this, one key concept to realize is that you can do things like implement a 'torrent file system', where a file is just a torrent hash which can be requested and signals when (parts of it) become available.


in the current state of technology it's impossible to determine what is derived data which can be easily recomputed and what is essential data without which the current state could not be reached.

Can you elaborate more on that statement? I see it as an expensive problem, but not impossible.

If you have a completely deterministic VM/instruction set, you can recompute any function output from any set of inputs. If you have non-deterministic input (network, input devices, randomness, etc) you can store those inputs to replay at a later time.

I don't know if it's always necessary to store non-derived data. Do you need to save all keystrokes? All mouse moves? All network packets? Probably not. If those network packets just result in displaying a graph on the screen, they can probably be thrown away. Maybe you can throw away the headers and just compute based on the packet data. Or maybe you can simply store the fact that a stream of data that matches a cryptographic hash was received. It depends on what function receives that non-derived data.

So I think the difficulty would be designing a system that would isolate derived data and replayable/recomputable/refetchable data, and the processes that compute that data -- and do so with reasonable efficiency. I think this could be done at the process level rather than the instruction set level (though you would have to have a restricted instruction set -- i.e. limited floating-point).
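
A toy sketch of the record-and-replay idea (all names here are made up for illustration): the derived state is just a pure fold over a log of recorded non-deterministic inputs, so it never has to be stored as long as the log (or enough of it) is kept.

    #include <stdint.h>
    #include <stddef.h>

    typedef enum { EV_KEY, EV_PACKET, EV_TIMER } EventKind;

    typedef struct {
        EventKind kind;
        uint64_t when;           /* logical timestamp, captured at record time */
        size_t len;
        const uint8_t *data;     /* payload (or, as suggested above, just a hash of it) */
    } Event;

    typedef struct { uint64_t digest; uint64_t events_seen; } State;

    /* Pure, deterministic transition: (old state, event) -> new state. */
    static State step(State s, const Event *ev) {
        s.events_seen += 1;
        for (size_t i = 0; i < ev->len; i++)
            s.digest = s.digest * 31 + ev->data[i];   /* stand-in for real derivation */
        return s;
    }

    /* Any past state is recomputable by replaying the log from an initial state. */
    State replay(State initial, const Event *log, size_t n) {
        State s = initial;
        for (size_t i = 0; i < n; i++)
            s = step(s, &log[i]);
        return s;
    }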


At the process level there are already plenty of programs that work this way and it's exactly the kind of thing the Clojure ecosystem is geared towards.

That's why the Purely Functional OS is essentially a thought experiment in what it would take to extend this to the entire operating system.

When I said "impossible given the current state of technology" I meant exactly what you said in your last paragraph though: impossible without designing an OS that isolates derived data.

I believe such a system will come in the form of persistent data structures coupled with reactive programming but, while I believe it's inevitable in the long run, extending that concept all the way to the OS will be quite the challenge.



In my view, the position "get a lot more cautious about source vs. derived data" is sort of a mystery, and I can only see one thing: temporality. Time-value.

If the CPU has 'now' to consider, and now is Time value [t] and if [t]'s are all there is between states, then 'think about [t1 .. t2]' should be a standard op.

And isn't this the point about immutability, that it has to have had a Time value, to be of any value at all, anyway, to the user?


We actually solved that problem and a purely functional distribution exists; it's called NixOS.


Very cool! "If you're the only one with an idea, it's probably a bad idea" :) Feel free to e-mail me, would be lovely to join forces.


A garbage collector that knows that all values are immutable will be rather interesting, I think. Typically, a garbage collector will stop the world (i.e. halt execution) to do heap defragmentation of the old generation. When all values are immutable, though, you can defragment the heap by copying a value to another fragment and just swapping the internal pointer to the value.

This might actually have significant performance benefits. When everything is immutable, GC can also be inherently incremental. You just do your defrag copy faster than new object creation. Then you can limit your stop-the-world to a small "catch-up" so your copy-from can become a copy-to. Incremental GC nirvana!
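
A rough sketch of the copy-then-swap idea (illustrative only, not from any real collector): because the object is immutable, the copy can run while the program keeps reading the old version, and publication is a single atomic pointer store -- here through a handle, which is just one way to make the swap visible to all readers.

    #include <stdatomic.h>
    #include <string.h>
    #include <stddef.h>

    typedef struct {
        size_t size;
        unsigned char payload[];       /* immutable once the object is published */
    } Obj;

    /* The mutator reaches every object through a handle it loads atomically. */
    typedef struct {
        _Atomic(Obj *) ptr;
    } Handle;

    void relocate(Handle *h, Obj *to_space) {
        Obj *old = atomic_load(&h->ptr);
        /* Safe to copy non-atomically: nothing can be writing to an immutable object. */
        memcpy(to_space, old, sizeof(Obj) + old->size);
        atomic_store(&h->ptr, to_space);   /* readers now see the new copy */
        /* The old copy is garbage and can be reclaimed whenever convenient. */
    }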


I don't see how immutability makes much difference to garbage collection. Garbage collectors don't look at the value inside memory they are collecting, only whether something live is still retaining that part of memory. Whether the value is mutable or immutable has absolutely no bearing on the performance or logic of the GC. Either something is still using the memory, or it is free to collect.

Also, you've just described existing GC systems - the Java G1 collector is very similar to that (it's a lot more complicated though).


Generational GC usually depends on write barriers to detect creation of references to newer generations inside older generations.

Immutability prevents the mutation of older generations, so no new references can occur.

So I expect generational GC to be easier to implement. I wouldn't expect it to be faster or slower though, because programs will need to be written differently to cope with the limitations on mutability. O(1) algorithms will likely be replaced by O(log(n)).


In the case of Clojure, log is log base 32, so it's close enough to O(1) as to make no difference in most circumstances.


Many immutable algorithms replace arrays with trees. Trees are pointer-heavy, and pointers are less cache-friendly than arrays.

Memory access is often over 100x the cost of an L1 cache hit. It doesn't take too many of those to make a big difference if you're CPU bound.

My comments are general, I'm sure Clojure has access to arrays where necessary for interop at least.


Again, true, but the big problem I'm facing isn't raw performance so much as it's guaranteed latency.


But the constant factor is Z, a very large number. A number so large it has a color and name. It pays taxes, has lunch and sees movies on the weekend.


Not true in the context of dynamic languages. The constant factor for persistent maps in that context is quite small.


Actually freeing memory is indeed possible (and common) without stopping the world. Heap defrag is another story, though. It involves moving objects to different locations in memory, and if the memory is mutable, measures like stopping the world are required since a multi-byte move is not atomic.


Doesn't matter. In an immutable model, the old copy is guaranteed not to change, so you can safely copy it non-atomically. (That's one of the big benefits of immutability.)


Which makes really good incremental collection much easier to write


But not faster.


True. But what I find myself wishing for is not more throughput, but seemingly pauseless operation.


Haskell's generational garbage collector is able to use some tricks to take advantage of the fact that values in Haskell are (almost all) immutable:

http://www.haskell.org/haskellwiki/GHC/Memory_Management


> This might actually have significant performance benefits.

Not really. On the other hand, it makes the implementation of GCs much simpler.

(immutability semantics also tends to make memory churn much higher, so what little gain you get from a simpler implementation is often lost in the GC having to do more work)

Also, it's not like these things are unstudied FFS.


It's not just GC, but virtually pause-less GC that is the possible benefit. Yes, these things are studied, but I would challenge you to find a free high-level language implementation with GC suitable for soft real time. Right now, I know of Erlang, which isn't the fastest language, and of running Clojure on the proprietary Azul VM.


I'm not sure what you're trying to say. If no such implementation exists, even though there are a number of languages predicated upon immutability, one would conclude "virtually pause-less GC" isn't such an obvious or easy "possible benefit" of immutable structures.

Unless you're using refcounting: since immutable structures can't create cycles, a refcounting GC won't need a cycle-breaker, and thus won't need to ever pause. But you'll be paying for it in throughput (especially for allocation, I'd guess).

Also not sure what your point is re. Azul; immutability is not what enables it, since Zing is first and foremost a Java VM.


I was thinking about refcounting, actually, but not exclusively. My point is that there aren't that many systems where you can build in a high level language, deal elegantly with concurrency, but still have really good latency guarantees. This project might improve on that.


> immutable structures can't create cycles

Perhaps I'm misunderstanding something, but doesn't "tying the knot" in Haskell do just that? (Create a cycle from immutable structures.)


Tying the knot is possible only if you have pervasive laziness.


Which is precisely why an immutable OS that's Clojure all the way down might be desirable.


I can't tell whether you're implying that Clojure has pervasive laziness or not. The contract of lazy sequences is too loose for tying the knot to be anything but a brittle hack -- and I think that's a good thing.


I'm referring to the article. The Clojure-inspired OS would, I suspect. If no one has referred to the unrealized parts, what would be the problem?


"When all values are immutable, though, you can defragment the heap by copying a value to another fragment and just swap the internal pointer to the value."

Wouldn't that turn shared objects into not-shared objects? Theoretically, that doesn't make a difference in a system without mutable data (assuming you leave out 'identity check', too), but in practice, it likely will blow up memory usage tremendously.

Edit: I think the way around this is to replace the moved object with a "this object moved" object. The trick, then, will be to know when it is OK to discard that "this object moved" object. That probably is the case after a full heap scan.


Cheney's algorithm (http://en.wikipedia.org/wiki/Cheney's_algorithm) is quite common in such systems and uses "forwarding pointers".


Agreed. The potion/QISH implementation of the two-finger Cheney loop is extremely efficient even without register values (volatile), typically 3ms pause time on a mid-sized heap. Compare that to typical Java or Ruby pause times, starting from 25ms to typically 150-250ms. You need more memory though, as in most Lisps it's using copying, i.e. compacting, not just simple mark & sweep. Incremental is doable, but you need to add GC state then, which is a bit tricky in threads. Actually libgc is pretty good.



I don't really know how feasible this is without pretty significant performance impact (esp. memory usage), but as a research project, it sounds fascinating. Go for it.


> don't really know how feasible this is

And I suspect neither does the author. I looked at the previous project (an OpenGL renderer), which comes in at 250 lines of code and barely any functionality. The current project has no code at all.

I'm all for sharing ideas, but this is somewhat lacking. The design document touches on the memory model, but ignores all the other problems an OS has to solve (scheduling, networking, disk IO...).


The author is probably not even aware of those problems, and will hit a wall when he realizes his new OS has to deal with one of those issues -- and that's assuming he even gets far enough in his project to deal with scheduling or IO. Writing your first operating system is an especially challenging task even when you're not trying to revolutionize the traditional OS model.

I don't suppose this project will actually go anywhere, though I'd be pleasantly surprised to be wrong.


The author could focus on an existing embedded RTOS and fork from there. Enough cherries to pick one.


But is he going to find one that follows his "all has to be immutable" approach? I somewhat doubt it. Do you know of any existing OS that works like what the author describes?


The reason I posted the idea was to start with a system that is working and where you can forego the filesystem and other parts that are typically found on a full blown OS. It's not uncommon for an RTOS to dismiss the GC and leave it to the programmer.

For example, there are discussions in the RTOS world to take a closer look at Rust (Google embedded Rust) which is why the terms "new OS", "GC" and "experimental" triggered my response to look over there for a starting point.


That is a very good idea. So far I've only been able to figure out EFI booting basics, which is the least interesting part. Do you have any concrete suggestions?


My recommendation would be to modify an existing OS, like MINIX, to have two types of processes: mutable and immutable.

That way some (not all) of the potential advantages can be tested without having to start from scratch.


That is very interesting. It would mean a lot less work on mundane things like file systems too. Will investigate.


I also like MINIX's message passing style of interprocess communication.


My thought exactly :)


Why make the system language a Lisp instead of a language that is immutable by default? I'd think Haskell is a shoo-in for something like this, and you'd have a head start with House: http://en.wikipedia.org/wiki/House_(operating_system)


Personally I'm more of a fan of immutability than purity, which is why I haven't considered Haskell. As mentioned in the post, though, I'm not sure if the language should be static or dynamic or something else. And I don't really want to invent my own language either. Food for thought :)


Pure functions would let many of your functions become immutable :)

EDIT: Well... not really I guess. But idempotence has similar properties I think.


I don't know much about Clojure, but I understood that it is a lisp and default immutable... so wouldn't that work?


Just curious, have you started looking into some of the Lisp machines of old? You might be able to get a head start by studying their structure (and of course their decisions):

https://en.wikipedia.org/wiki/Lisp_machine


There are definitely interesting things there, but fwiw Lisp machines weren't based on immutable data structures. Common Lisp in general has an ethos of being multiparadigm and using imperative updates whenever convenient or performant. Functional constructs are probably used more than in C-like languages, but not exclusively, and are probably a minority approach in "production" Common Lisp. That's one place where Clojure differs quite a bit from CL; it sort of combines the "Schemey" preference for functional style, non-destructive functions, recursion, etc., with the CL ambition of a "batteries included" library and industrial-strength tooling.

However if you look at the first-order similarity as not being immutable data structures, but instead the idea of an OS entirely built within a managed runtime, the Lisp Machines and Microsoft's Singularity OS (C#) are two interesting examples.


Op here, thanks to both of you :) I would very much like to expand prior art to more than just Linux and Clojure.


Related, a purely functional package manager, Nix, and a Linux distribution based around it: http://nixos.org/nix/


I see a number of projects sharing similar traits:

  - DVCS
  - btrfs,zfs
  - rethinkdb
  - persistent datastructures (cons cells, fp trees)
  - nix package management, virtualenv, containers
  - react/pedestal (pushing ux delta upgrades)


Mutability/immutability goes pretty deep philosophically... all of those projects are basically built on persistent data structures, but persistent data structures can be seen as just one type of "making time explicit". In other words, with mutability, a variable (or data structure) takes different values at different times, while with immutability, you can still have time but you name each version through time explicitly. (Values vs. variables, etc.) The same concept exists in e.g. static single assignment (SSA) in compiler IR, or register renaming in high-performance processor design... pretty powerful concept.
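
A trivial illustration of "naming each version through time", which is the same renaming SSA does in a compiler IR (toy example, made up for this comment):

    #include <stdio.h>

    int main(void) {
        /* Mutable view: one name takes different values at different times. */
        int balance = 100;
        balance = balance - 30;
        balance = balance + 5;

        /* Immutable / SSA-style view: each point in time gets its own name,
         * so "which value?" never depends on when you ask. */
        const int balance0 = 100;
        const int balance1 = balance0 - 30;
        const int balance2 = balance1 + 5;

        printf("%d %d\n", balance, balance2);   /* both print 75 */
        return 0;
    }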


As an aside: I'm pretty mind blown that I've never once considered immutability as "making time explicit". That's an extremely apt comparison.


How could I forget SSA. The idea here is to re-express things by pushing immutability down to a lower level as a common base.


An operating system whose main goal is not performance is forever going to be a toy of professors. Linux is a mess compared to the beautiful code that would comprise this OS, but it's a mess that runs fast on real hardware and lets people get stuff done. That matters.


An operating system whose main goal is not performance is forever going to be a toy of professors.

Be careful! That statement was once true of programming languages.

Over time, larger and larger numbers of people relaxed their expectations for performance in exchange for benefits like greater safety. Simultaneously, implementors invested in optimizing the performance of languages previously considered "slow."

Operating systems may well wind up with the same dynamic.


But only because hardware got much faster, with larger resources (memory, disk). I just did a Stupid Benchmark, compiling this code:

    /*
     * Standard 'C' demo... Say hello to the world
     *
     * Compile command: cc hello
     */
    #include <stdio.h>
    
    main()
    {
      printf("Hello world\n");
    }
(I know, not completely valid ANSI C code). On my home system (32-bit 2.6GHz Intel, gobs of RAM, yada yada) it took GCC 0.2 seconds to compile and link the program as a static program (by default it did it in 0.05 seconds). On the system I grabbed the code from (an operating system for the 8-bit MC6809) it took 35 seconds. And that was on a simulated MC6809 running on said 2.6GHz system, which, at approximately 50MHz, is still faster than a real MC6809 system of 1MHz (I think you could get a 2MHz MC6809, but not much faster than that).

Even a state-of-the-art computer in 1980 (say, a Cray supercomputer) would have a hard time keeping up with a hobbyist system of today (http://www.chrisfenton.com/homebrew-cray-1a/), and it's because of this increase in computing power that we've been able to have the "toy languages" of yesterday be useful languages today.

But given that CPU speeds have leveled off, the easy performance gains are pretty much gone these days.


On the surface, you wouldn't think immutable data structures would offer any significant performance gains. But there are a lot of cool techniques and unforeseen consequences that emerge.

The Clojure library "om" is able to use reference equality (instead of a deep comparison) to check for diffs. This results in a 2x or 3x performance gain over mutable data structures.

So I agree with you, this will likely just be a toy OS. But if his design demonstrates huge advantages, the ideas could be adopted by mainstream operating systems.
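
The trick, for anyone unfamiliar, is that with immutable data pointer equality implies deep equality, so a diff can prune whole unchanged subtrees. A minimal sketch (the tree type is made up; this isn't om's actual code):

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct Node {
        int value;
        const struct Node *left, *right;   /* immutable, so safe to share between versions */
    } Node;

    bool changed(const Node *old, const Node *new) {
        if (old == new) return false;      /* reference equality: the whole subtree is unchanged */
        if (old == NULL || new == NULL) return true;
        return old->value != new->value
            || changed(old->left, new->left)
            || changed(old->right, new->right);
    }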


Forgot to source my info on the "om" library. http://www.infoq.com/news/2014/01/om-react


As I mentioned elsewhere, such an OS could have important performance benefits. GC could be made to be incremental at a fundamentally new level. In fact, incremental GC could be nothing more than a series of defrag copies that proceed at a rate just faster than new object creation. Such a system would rock for writing real-time systems.


Important performance benefits compared to what? Systems that are very performance sensitive typically don't use garbage collectors to begin with. They instead rely on techniques to avoid generating much garbage in the first place.


You're making my point for me here. This kind of technology might make it possible to write even more performant soft real-time systems but still have the productivity benefits of GC.


I don't think I am. Trading performance for productivity is a completely different discussion. It sounded like the claim here was that immutability in the OS (whatever that means) might result in a system with better performance.

We have programming languages where everything is immutable (Haskell). This results in programming techniques that generate a lot of garbage. We have not seen that this results in programs with better performance than those that do not generate garbage to begin with. Why would it be different in an operating system?


Trading performance for productivity is a completely different discussion.

The point is that you could possibly buy more for less in this trade off. It's not just GC. It's a close to pauseless incremental GC that might become available.

It's not so much that it has to be in an OS, though it seems it would be better in an OS.


I think this is becoming less true: people are willing to trade off performance for safety and flexibility, given the right circumstances. For example, running everything inside of virtualized Linux instances is less performant than running in bare-metal Linux, but virtualization is nonetheless taking off like crazy.


> but virtualization is nonetheless taking off like crazy.

I think that's more due to hosting providers being able to sell far more "machines" than they have hardware for.


Ok, so people are willing to trade off performance for safety, flexibility and price.


We're running everything on our own fully owned colocated servers, yet we're running everything in virtualised environments. Private clouds are attractive for many of us because of the management benefits of being able to use a VM as the unit of deployment. It also makes sense to increase isolation, as colo space is expensive enough to justify servers that are much more powerful than many of the apps we run need (our typical new setups start at 16-24 cores these days, and I expect that to be 32-48 cores by year end), so we want to co-locate many apps on the same servers but don't want them to be able to interfere too much with each other.

In fact, we have some systems where things are even deployed in containers inside VMs for various reasons...


Sometimes 'toy of professors' (research) is the point.

Not everything has to run a social network and not everyone is working on a twitter clone.


Oh I'd love to see him do it, because it would be a great research platform. Don't get me wrong. But that didn't seem to be the goal. Maybe I just interpreted the article wrong.


Isn't Twitter written in Scala, a programming language designed with immutability in mind?


Yes, but his comment was about an OS needing to be pragmatic and designed with performance in mind (which Scala was), not about it not being immutable.


The goal of Linux was not performance, it was to create a free OS. The fact that it has become a high quality performant OS is a side-benefit (driven by external commercial factors, much later in its development timeline).


I'm not sure that was even the goal. Seems Linus just wanted to create it for the hell of it, didn't think it was going anywhere, and released it as free software just because... From everything I've read of the history of Linux it seems it was mostly an accident (a pleasant one) of history. (I use Linux BTW)


That.

Also, Linux found a niche, and stopped being a toy very early, much before people started really caring about performance.


Finding new structures is important; see the issues with X vs. Wayland. By re-ordering things you get far better performance even on limited systems (the Raspberry Pi, for instance). Sure, you lose other properties, but that's part of the compromise. I'd love to see a similar effect on the whole system by going immutable first. Added to that, it could give a smaller system that's easier to understand, secure, and change.


That's as silly as the claim that no one will use relational databases, because only humans can properly optimize queries and transactions are too slow to be useful.


That doesn't really matter if the idea is to experiment.


Awesome! You should have a look at Urbit: http://www.urbit.org/


There appears to be no "About" on that site. Who knows what on earth it is?

On the left sidebar, the "documentation" link is to a video. No thanks, I'm looking for a sentence of text.

The next link down is a 404 error.

The community link is for insiders... lots of comments about people leaving the building, whatever that means.

Bottom of page, how to install. Why am I installing something when I don't know what it is?

Sites that expect you to already know what they are about, and do not deign to give so much as a sentence of explanation, are extraordinarily frustrating.


Sorry our site sucks. We released accidentally and are more or less in "semi-stealth mode." You probably shouldn't be interested yet. Unless you like playing with semi-broken stuff. When it's not broken we'll let you know.

Urbit is indeed a purely functional OS, not super dissimilar to the OP's project. And it works at least well enough to run a chat server on its own network (which generated all those logs with "leaves the building"). But we ship no code before its time and we probably shouldn't even have a site up.

You'll enjoy the video though - it has good music, at least.


That's fine -- but please add an "About" link that says what "urbit" is.

That lack of explanation, that expectation of speaking to insiders, is unfortunately very common on web sites, not just yours.

Please add the About explanation even if you decide to take the site offline. At least that way, it will already be ready, later, when the project is further along.

I was just reading about the defunct House functional OS yesterday, coincidentally, so I'm interested in the general topic.


Thanks again for the advice.

Do you have a link to House? I wish people naming projects would realize that the Internet is dead and all we have is the Googlenet...


I totally agree, it's sad when interesting links die, and it's sad when a search term is extremely generic.

Here's what I've got, pasted out of my notes:

House has been successfully implemented in Haskell, so I'd assume that Haskell would fit your criteria. Admittedly, House is an experimental OS rather than a production one. But they have managed to implement everything from the kernel through the network stack to a rudimentary GUI. [link broken, see search results below] 2005 http://lambda-the-ultimate.org/node/943

House is a demo of software written in Haskell, running in a standalone environment. It is a system that can serve as a platform for exploring various ideas relating to low-level and system-level programming in a high-level functional language. More details are available in our ICFP 2005 paper: A Principled Approach to Operating System Construction in Haskell. http://ogi.altocumulus.org/~hallgren/ICFP2005/ http://programatica.cs.pdx.edu/House/

terse wikipedia article on House http://en.wikipedia.org/wiki/House_(operating_system)

short LTU thread on functional systems programming http://lambda-the-ultimate.org/node/943

...oh, also there's a book, "Unix System Programming with Standard ML" by Anthony L. Shipman 2001

Was free online, link is broken, available via archive.org:

http://web.archive.org/web/20040531113417/web.access.net.au/... http://web.archive.org/web/20040531113417/web.access.net.au/...

I just found that, have not read it.


House is an OS in a very different sense than Urbit - the "boots on bare metal" sense. The core of House is a monad which represents the hardware state of a common Intel box.

Urbit is intended to run virtually in the cloud and is an OS only in the sense of "stores your data and runs arbitrary programs". As a cloud computer its "function" is simply (old state, packet in) -> (new state, packets out). For communicating with the host OS this generalizes to "event in" and "actions out." Eg, I got an HTTP connection so I send an HTTP response. So not only isn't it done, it isn't even very smart... but it is a function and it is an OS.


Your video is awesome :) It warped my mind a little bit (in particular the "let's execute some program written next week" part--or did I imagine that), and also made me dig into Borges' works again ;)


There's a long-winded description in the first blog post:

http://www.urbit.org/2013/09/24/urbit-intro.html


Complimentary Loper OS link, too: www.loper-os.org/?p=284


Sort of a sidebar: The term "atom" doesn't seem to have the same meaning in all languages.

Atoms in Clojure are reference types, and they are mutable in a controlled way (they can be changed to refer to another immutable value).

In Erlang, an "atom" doesn't refer to an immutable value; it is the immutable value. An Erlang atom by definition cannot ever be any other value.
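
For the Clojure case, in C terms an atom is roughly a mutable reference cell that is CAS-swapped to point at a new immutable value computed from the old one; the values themselves are never modified in place. A rough sketch with made-up names:

    #include <stdatomic.h>
    #include <stdlib.h>

    typedef struct { int balance; } Value;             /* treated as immutable once published */

    typedef struct { _Atomic(const Value *) ref; } Atom;

    /* Like Clojure's swap!: apply a pure update to the current value and publish the result. */
    const Value *swap_add(Atom *a, int delta) {
        for (;;) {
            const Value *old = atomic_load(&a->ref);
            Value *next = malloc(sizeof *next);
            next->balance = old->balance + delta;       /* pure function of the old value */
            if (atomic_compare_exchange_weak(&a->ref, &old, next))
                return next;
            free(next);                                 /* lost the race: retry with a fresh old value */
        }
    }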


I'm not sure how this would look very different from current pure functional languages. Think about programming in Haskell. Everything is immutable but you can compile down to machine level binaries. So at some point the programmer is presented with an abstract immutable model and under the hood some runtime does real pointer arithmetic and all. Even garbage collectors could be optimized already for knowing that the programmer has no knowledge of memory locations.

If this kind of OS encourages hardware changes, that's one thing. But aside from performance, it probably won't look too different from current FP languages.


He's not designing a language; he's designing an operating system.


The GHC RTS (used in most Haskell programs) is practically a mini-operating system already.


I haven't written any Clojure, but I have written a good bit of Erlang. My understanding of the Erlang Virtual Machine tells me that it solves a lot of the problems you describe regarding immutable data. For example, per-process garbage collection due to immutability, message passing via copying data, etc.

I think you would be well-served to experiment with and learn some Erlang to help inform your design. In fact, Erlang often feels to me like an operating system due to the independence of processes (and the multitude of tools built on top of it, but that's not directly relevant here). I've often dreamt of an Erlang Machine like the Lisp Machines of the past.

As an aside, Erlang's primary goal is fault-tolerance. Its other properties, such as immutable data, message passing, and functional style, were all design decisions made to achieve this goal. My point is that your OS could be very well suited to fault-tolerant systems.


I'm an Erlang noob, but afaik the model is pretty different. I'm imagining a shared-everything system, where immutable values are freely passed around with no copying. Erlang is more of a shared-nothing system. Also, Erlang requires multiple machines to shine, but an OS is more about managing one machine.

I do think message passing has a bright future, though, and my OS will not work at all in a shared-nothing manycore system.


I see. You are correct, Erlang definitely falls on the shared-nothing side if you ignore some uses of ETS and ref-counted binaries (binaries above 64 bytes, I believe).

I disagree with the statement "Erlang requires multiple machines to shine," but my motivations, and my definition of "shine," are likely different from yours. Erlang only requires >1 machines if you are creating a system with proper fault-tolerance. In my opinion, it has good SMP support and it performs well for general-purpose applications. You wouldn't use it to implement matrix multiplication, though :)

Anyway, the gist of my post is that the lessons learned from writing Erlang programs might be useful in your work.

Good luck! I look forward to hearing about your progress.


I once skimmed Armstrong's dissertation; it mentioned how the JVM is bad for fault tolerance, because it's possible for a runaway thread to break the entire VM. Apparently, the Erlang VM manages isolation better, in such a way that it's impossible for one process to take down the entire system. That is definitely something I want to bring along to this OS :) I'm toying a lot with Erlang these days, to ensure Clojure and Linux aren't the only prior art.

And you're right, "shine" is a generalization, I actually had fault tolerance in mind when I wrote that.


But garbage collected memory of immutable values and atoms, in hardware?

Hardware transactional memory might help here. http://en.wikipedia.org/wiki/Transactional_memory#HWIMPLEMEN...


This idea reminds me of what Go is doing with their channels to solve concurrency issues: they're essentially sharing data by copying the data.

Copying data is slow and that's why Go's channels aren't the fastest solution, but they're less complex than dealing with the traditional concurrency issues.

I've written a toy kernel and I just can't imagine copying all process data because 1 little thing changed. However, there's no reason not to pursue this idea, as it may prove to be useful in certain cases. If real-time schedulers have their place, why wouldn't such an OS have its uses? I think Go (and other projects) have proved the concept works, so maybe it's time to see an OS?

Best of luck to you and I hope to hear good news in the future!


> they're essentially sharing data by copying the data.

No. Go only copies the immediate value; if you send a pointer, the pointee is shared, for instance. Go's concurrency is shared-memory.

Erlang copies (almost[0]) the data sent between processes, and has process-local heaps.

[0] binaries above 64 bytes (IIRC) are refcounted and shared.


Thanks a lot :) I cannot imagine that the web will ever work on this OS. How do you implement a mutable system (JavaScript) with an immutable core? So it's definitely not for everyone.


Is it copying all process data? Or copying a reference to the process data and then performing copy-on-write when a process makes changes locally?


When you fork() a process you don't have to copy all process data either, thanks to copy-on-write semantics.
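
A minimal illustration of that (POSIX): the child gets a logical copy of the whole address space, but a page is only physically duplicated when one side writes to it, so the parent's view stays untouched.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int x = 42;
        pid_t pid = fork();            /* the child logically copies all memory... */
        if (pid == 0) {
            x = 99;                    /* ...but a page is only physically copied on write */
            printf("child sees  x = %d\n", x);
            _exit(0);
        }
        wait(NULL);
        printf("parent sees x = %d\n", x);   /* still 42: the parent's page was never touched */
        return 0;
    }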


I've always been baffled by Clojure's view on state. It seems like just a rename of the concepts present in most languages today, i.e. an atom is a mutable variable and a value is an immutable value. What's the novelty here?


The novelty is the sole fact that, within Clojure's mechanics, one has to go out of their way and make a big deal of invoking mutation - mutable constructs become neither cheap to mentally fiddle around with nor the default thing to implement with.

(FWIW, though, Clojure's novelty to me was an STM system that I could get up and running without a night's worth of tweaking, and otherwise fall safely into a great deal of attention if something were to go hilariously wrong - most of the other solutions around "back then" seemed to regard themselves as research curios lagging behind in fit and finish.)


You mention that one of the benefits would be that memory can be shared across processes without any defensive copying or protection semantics, but isn't the entire idea that immutable values are a special type of defensive copying and protection semantic?

I imagine that this would be useful if doing many calculations on readonly values, but would take a performance hit for readwrite operations.

I'm not one for purely functional programming, but this seems like an interesting concept. I'm interested to see where it goes


Interesting definition of immutable values, and a valid one. My perspective is different: an immutable value adds more than just the avoidance of defensive copying, such as structural sharing. When you add a key to an immutable map, the new map can point to the old.
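
A minimal sketch of that sharing, using a persistent association list (Clojure's maps are actually tries, but the principle is the same; all names here are made up):

    #include <stdlib.h>
    #include <string.h>

    /* "assoc" never mutates the old map; it conses a new entry whose tail is the
     * old map, so every older version stays valid and shares all previous entries. */
    typedef struct Map {
        const char *key;
        int value;
        const struct Map *rest;    /* shared with the previous version, never modified */
    } Map;

    const Map *assoc(const Map *old, const char *key, int value) {
        Map *m = malloc(sizeof *m);
        m->key = key;
        m->value = value;
        m->rest = old;             /* structural sharing: the new map points at the old one */
        return m;
    }

    int lookup(const Map *m, const char *key, int *out) {
        for (; m != NULL; m = m->rest)
            if (strcmp(m->key, key) == 0) { *out = m->value; return 1; }
        return 0;
    }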

It has become more and more common to model systems with append-only as the only form of write (Event Store, Datomic), so readwrite can probably be avoided in some cases. And things like IP packets certainly make sense to represent immutably - anything else is a bit of a lie :) But there is certainly a risk that the tradeoff of immutability becomes too costly for an OS.

Also, the system language will not be pure, like Haskell. Just immutable values, like Clojure :)


Maybe consider Fressian for EDN-like bytecode format...


Ah, thanks. When I said EDN, I meant Fressian :)


I really doubt the practical usefulness of such an operating system, other than pure fun and skill development (and maybe some tools that fall out at the end...).

Isn't the main purpose of an OS to bridge between the applications and the hardware? The hardware is pretty accurately abstracted as something with state and input/output streams.

What is the advantage of abstracting the hardware as something without state?


Is there a list of research / alternative operating systems anywhere?

I don't know how an OS would qualify to get on the list, other than not being DOS, Windows, OS X (or any of the Apple OSs), Linux (especially not 'Ubuntu with a different DE'), or the BSDs (I guess there are exceptions if they're on a toaster).


I was making a list of a few I knew, but it would be better to just check wikipedia: https://en.wikipedia.org/wiki/Comparison_of_operating_system...

Those are more serious kernels I think. There are also a ton of research-related kernels. I guess you can find a few for some programming languages.


Here is information and links about some unusual operating systems, including several discussed above.

https://github.com/jon-jacky/Piety/blob/master/doc/precursor...


We already have a purely functional Linux distro that is immutable and stateless, called NixOS.


'Clojure has transients, which basically means "inside this function, use a mutable value under the hood while building up the value, but make it immutable before it's returned"' - is this essentially a ... closure?


Transients in Clojure are a feature added on top of the built-in immutable values. Essentially, values stay immutable, but Clojure does have mutable-ish versions you can use to build up a semi-mutable transient, and when you're done, you convert it into an immutable value before it's returned.

The key is to make it fast. Mutable -> immutable is not an O(N) copy.


Can someone tell me where I can find a primer on immutable vs mutable, state etc? I have tried to piece together an understanding from wikipedia etc but I'm not really "getting" it.


So how do you load this into a VM?


The makefile builds a virtualbox image. But it doesn't do anything interesting yet. I didn't submit to HN, the project is a long way away from being demoable and testable.


how would a user interact with such a system?



