
Lisp-based OSes - fogus
http://linuxfinances.info/info/lisposes.html
======
rst
The Unix-style process model has virtues that the OP doesn't seem to grok.
It's sometimes helpful to be able to restart one server from a completely
clean memory image without taking the rest of the system down.

Beyond that: the OP sez that "if the whole system is constructed and coded in
Lisp, the system is as reliable as the Lisp environment. Typically this is
quite safe, as once you get to the standards-compliant layers, they are quite
reliable, and don't offer direct pointer access that would allow the system to
self-destruct."

But as I write, we've got two Lisp posts on the front page, and the _other_
one[1] is about the performance of code compiled with

    (declaim (optimize (speed 3) (safety 0) (space 0)))

That is --- "omit safety checks, just trust me that my array accesses are all
in bounds and I'm getting the types right." Code compiled this way is _not_
inherently safer than C, and has to be coded up with equal care.

So, at the very least, the "quite safe" guarantee applies only to code
compiled with full safety checks, which typically come with a very large
performance hit.

[1] <http://news.ycombinator.com/item?id=2192629>

~~~
neutronicus
The other half of his point was specially tuned hardware - the author seems to
believe that the type-checking, gc, etc. don't cripple performance the way
they do on x86.

I don't know if he's _right_ , but that seems to be his point.

~~~
ohyes
The idea is that you would have bits in the hardware dedicated to type
checking and garbage collection. For example, in the assembly language/machine
code you might have a single, generic arithmetic '+' operation.

Determining which hardware path to use to add two numbers would then be done
in the hardware itself: check the type bits of the operands and feed them into
the right ALU. Compare this to an x86 Lisp, or compiled C, where the 'type' of
a 'number' is determined by the assembly instruction that is used on it.

This isn't just a performance improvement, it is also an improvement in the
safety of the dynamic language.

There are a lot of different things that you could do for garbage collection.
You could have in-hardware reference counting; 'dirty' and 'clean' (or color)
bits for a mark-and-sweep collector; or 'generational' bits for an ephemeral
garbage collector.

The idea is that any time you take something out of software, and put it into
specialized hardware, you should get a performance improvement.

This doesn't mean that the lisp on a chip would be faster than C on a
comparable x86 chip, it means that the things that make lisp (and other
functional languages) safer and easier to use would be supported in hardware--
therefore not slowing things down as noticeably.

~~~
derleth
> it means that the things that make lisp (and other functional languages)
> safer and easier to use would be supported in hardware-- therefore not
> slowing things down as noticeably.

Or, at least, forcing every language implementation on that hardware to use
the same safety mechanisms, making some apples-to-apples benchmarks
impossible.

It would be interesting to see what a C implementation for that hypothetical
modern Lisp machine (CADDR?) would look like.

A close parallel is AMPC, which compiles C to JVM bytecode.[1] The vendors say
it's standards-compliant, and I actually am pretty sure it is, but it doesn't
do a lot of the nonstandard things C programmers have come to depend on. For
example, the 'struct hack', where you pack data of multiple types into a
struct and proceed to index into it as if it were an array (usually an
unsigned char array), flatly does not work, due entirely to the runtime type
checking done by the JVM. This always seems to lead to major debates over
whether it's a very good compiler.

[1] <http://www.axiomsol.com/>

~~~
sedachv
The 'struct hack' is when you leave the length of the array that is the last
member of a struct unspecified (effectively making the struct variable-sized).
This is actually not a problem for runtime type checking, and the flexible
array member form is C99 compliant.

What causes problems is casting pointers to ints and back, and casting all
other crap to chars. This is not standards compliant.

Casting ints to pointers will never be type-safe, but one way to get around
that is to just ignore the cast, and overload arithmetic operators to work
correctly on pointers - the pointers will carry around their type info, and
everything should work ok.

Casting other crap to chars will never work because it interferes with the way
the other crap has its type encoded. Luckily in most cases this casting is
done to perform I/O, where you can also just ignore the cast, and specialize
the lowest-level I/O functions to dispatch on the actual types.

The moral of the story is that you should basically ignore all the line noise
the programmer produces about types, and look at the actual objects. This is
exactly how Java works, btw.

WRT hardware tagging and type checks, there's really no reason to do it on a
byte-addressed superscalar processor. If you look at 64-bit Common Lisp
implementations today, you'll actually find that they use only about half the
available tag bits in each word. The only things that need to be boxed are
double-floats.

------
danking00
DARPA recently awarded a grant to Olin Shivers, along with members of
Northeastern University's and University of Utah's faculty, to "seek to
develop bug-free, secure technology using brand-new programming languages that
enable programmers to write large, complex software."[1]

Around campus, it's been described as an opportunity for Shivers et al. to
write an operating system built completely with functional languages, from
the low-level drivers up to user-space tools and new programming languages.

My personal thought is that it'd be awfully cool to have something like
"Emacs as a real OS." Perhaps it is lack of knowledge and self-confidence or
the limited nature of Emacs, but I find it way easier to change the way Emacs
works than to change the way the Linux kernel, GNOME, GNU tools, etc. work.

[1] Page 12 of <http://www.ccs.neu.edu/news/CCIS-Newsletter-Fall-10.pdf>

~~~
pnathan
I think it would be interesting to chop a *macs into an operating system. I
would approach it in an iterative fashion with these initial goals:

\- Replace elisp with Common Lisp

\- Build os-level threading support

\- Build a hardware abstraction layer / target a 'bare' machine.

That gets someone a 'ways' towards a traditional Lisp OS.

I think one of the big questions that arises for a modern Lisp system is the
design of multiple processes and multiple users.

~~~
danking00
Personally, I'd rather toss Lisp entirely and go with Scheme, but something
definitely needs to be done about elisp. There's a small group of undergrads
here at NU hacking on Edwin, a Scheme-based Emacs clone originally developed
at MIT. They're (we're, I suppose) porting it to Scheme48.

Those last two would certainly be important as well, but they're something I
have no context on.

------
mark_l_watson
I bought a Xerox 1108 Lisp Machine in 1982 and loved it for the great display,
windowing system, and awesome InterLisp-D development tools.

That all said, I prefer the modern world of general purpose operating systems
with good commercial (Franz, LispWorks, etc.) and free (SBCL, Clozure,
Clojure, etc.) Lisp development environments.

~~~
bane
On a whim, have you ever looked into any of the LISP machine emulators?

~~~
mark_l_watson
Yes. I found a simulator for the 1108 bundled with an NLP package and ran it
for an hour, then deleted it. Not the same experience as using my old 1108.

------
stcredzero
Combining the Unix style process separation with mechanisms in the language
might be useful.

A problem with such environments might be the availability of widely used
applications like Google Chrome or Firefox. A way around this might be to
expose the language's virtual machine as bytecode or some other intermediate
representation, and target C compilers at it. This way, an entire POSIX
environment could be built on top of the Lisp-based OS, which would be more
comfortable for many users, yet still offer omnipotent, seamless access to
code everything "from the bare metal up" in Lisp.

~~~
thmzlt
I recently found this Scheme to C/JVM/C# compiler:
<http://www-sop.inria.fr/mimosa/fp/Bigloo/>

~~~
igrekel
I've heard of people having good results using Gambit to generate C code from
Scheme to write software for unusual platforms.
<http://www.iro.umontreal.ca/~gambit/doc/gambit-c.html>

------
1337p337
Surprised no one has mentioned MonaOS. It's a small, Scheme-based, x86 OS.
It's got a lot of parts written in C and assembly and is occasionally buggy,
but is fun. It is largely a one-man show, but it's already fairly functional.
There are images at <http://monaos.org/> and the GitHub repo is at
<https://github.com/higepon/mona>.

------
limmeau
I'm surprised that none of the efforts listed seem to target hypervisors like
Xen or KVM.

A few weeks ago, Azul's VM was on HN, and its GC benefits from tight
integration with the virtual memory system.

~~~
sedachv
You can get Azul's Linux patches here:

<http://www.managedruntime.org/>

From what I understand, the main win is that they use nested page tables to
let the JVMs handle page faults directly, which is how they implement high-
performance read barriers.

I don't know a lot about garbage collection, but read barriers seem to be the
essential piece for implementing real-time (which really should be called
"non-blocking") GC.

There's a good discussion on LtU about this:
<http://lambda-the-ultimate.org/node/4165>

[edit] I should mention how this relates to Lisp operating systems: if you
replace the virtual memory system with a garbage collector (i.e. push the GC
into the kernel), you can get the same effect but without needing nested page
tables/VT-x/RVI, even for user-space processes.

It should also be more efficient and waste less memory on fragmentation than
going through a dumb VM.

------
defroost
While not a Lisp OS, StumpWM <http://www.nongnu.org/stumpwm/index.html> is an
interesting project. I run it on Debian, and I like to pretend I'm using a
Lisp Machine.

------
nwmcsween
Unix-as-in-POSIX's days are either numbered, or it is going to be perpetually
hacked into doing things it can't do without issue. Distributed computing is
becoming more the norm, with consumers owning many devices. For a look at what
future operating systems might look like, see Midori or Inferno (which was way
ahead of its time), or any other VM-based operating system.

------
pnathan
Does anyone know if there's any sort of open source project for a modern Lisp
OS that is ongoing?

~~~
sedachv
<http://common-lisp.net/project/movitz/>

Frode V. Fjeld doesn't seem to have much time to hack on it anymore, but the
mailing list is active and you can hack on it today.

------
Stormbringer
If there really was a tool (whether that be hardware or software or a
combination) that for a couple of thousand dollars would genuinely make you
10x more productive, you'd be an absolute fool not to run out _now_ and buy
it.

Since people aren't willing to do this, it goes to show that the claims of the
Lisp junkies are just pipe dreams.

If programming on an all Lisp environment really was 10x more productive even
a ten thousand dollar price tag would be chicken feed.

Lisp fans like to talk it up about how great it is, but at the end of the day
are unwilling to put their money where their mouths are.

~~~
justinlilly
This is only the case if productivity is your only concern. There are also
considerations such as support, security, and familiarity.

