LispOS: Specification of a Lisp operating system (2013) [pdf] (metamodular.com)
140 points by molteanu 7 months ago | 50 comments

A growing group of us are implementing Clasp (https://github.com/clasp-developers/clasp) - a new Common Lisp that uses Robert Strandh's Cleavir compiler, interoperates with C++ and uses LLVM as the backend. Clasp is the foundation of Cando - a programming environment for the development of molecular devices (https://www.youtube.com/watch?v=mbdXeRBbgDM&feature=youtu.be). Robert Strandh is an exceptionally clear thinker and a fantastic programmer. I can't count the times that Robert has implemented changes to the Cleavir compiler source code and without him being able to test it (long story) passed it to me and it ran flawlessly in Clasp. I implemented his fast generic function solution within Clasp based on another of his papers (http://metamodular.com/generic-dispatch.pdf). Edit: Basically - if Robert writes it up in a paper - I want to have the implementation.

Once we develop molecular robots - they are going to implement LispOS. :-)

Oh, this is by Robert Strandh, who's also writing SICL, a very cool, modular implementation of Common Lisp in Common Lisp, designed for bootstrapping. The idea is that other implementations of Common Lisp in Common Lisp can take modules from SICL. Christian Schafmeister's Clasp, another very interesting implementation of Common Lisp (using LLVM and made for easy interop with C++), already uses parts of SICL, particularly the compilation framework Cleavir.

From Chapter 1.4:

"The most important aspect of a Lisp operating system is not that all the code be written in Lisp, but rather to present a Lisp-like interface between users and the system and between applications and the system."

I always felt that one of the great things about the Lisp Machine was the Lisp-like interfaces into the system. For a lot of stuff there is, for example, very little manual memory management necessary, and the whole thing is very introspective at runtime, due to a very object-oriented architecture. The operating system runs most of the time in a way that would be beyond a typical debug mode on conventional machines: all data is tagged, almost all data is an instance of some class, all basic functions are type checked and bounds checked at runtime, all function calls are on the stack, and many functions have (optional) named arguments.

A developer would also run the whole OS with full cross references, documentation references and source references. Then there are selective ways to run some code with higher debug features, like source locators for local variables/functions or using the Lisp interpreter for some code - usually code would run compiled, but the interpreter would support things like source stepping and seeing macro expansions.

If this is to be run on regular CPUs, then since assembly programming is imperative, won't there necessarily be non-Lisp-like code underneath, with (hopefully) Lisp-like interfaces?

Classical Lisp was never really big on FP. Scheme is the variant in which FP is more emphasized.

If you look at the code of OpenGenera, it’s mostly OO/imperative.

Though modern FP has significantly diverged from Lisp and its roots: things like static typing, higher-kinded types, etc.

To cut to the chase, is the takeaway claim that it's easy+performant to write low level code in lisp? Out of curiosity, what would be good examples along these lines? Thanks!

Check out https://github.com/mietek/mit-cadr-system-software. It's a tape dump of the CADR system. You could also check out OpenGenera, but it's not (legally) available.

If you want modern examples, check out SBCL, a Common Lisp implementation: https://github.com/sbcl/sbcl

While a Scheme and not a Lisp, you can check out Gambit Scheme. https://github.com/gambit/gambit

It's written in a functional style and compiles Scheme to C. The generated code is really fast.

As an aside, there really should be no difference in performance between a loop and a tail recursive call, assuming the compiler does TCO. So if that's your definition of "functional", then it should have the same performance as any "imperative" language.
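To make that concrete, here is a sketch in Common Lisp (the CL standard, unlike Scheme's, does not require TCO, but implementations such as SBCL perform it under usual optimization settings). The function names are made up for illustration; with TCO both compile to essentially the same jump-based loop:

```lisp
;; Two ways to sum the integers 1..n.

(defun sum-loop (n)
  (let ((acc 0))
    (dotimes (i n acc)          ; imperative loop, returns ACC
      (incf acc (1+ i)))))

(defun sum-tail (n &optional (acc 0))
  (if (zerop n)
      acc
      (sum-tail (1- n) (+ acc n))))  ; tail call: no stack growth with TCO

;; (sum-loop 1000) and (sum-tail 1000) both return 500500.
```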

Several examples inline, and links to more, here:


Lisp has almost never been a "no imperative code" language.

I feel like the concept of lisp speaks to the part of me searching for simple, holistic consistency. The same part of me that wants to implement the general solution rather than the specific solution.

While I've grown a lot since the times when I would act on those thoughts more often, I still get a lot of satisfaction from the moments when I get to exercise that part of me.

I would love to get involved with a project that is doing this. Are there any besides Mezzano?

I feel GuixSD is approaching this, with Shepherd for init, Guix for package management, Emacs, and then some Lisp window manager?

Was it ever implemented? Are there any usable (at least working in a VM with networking) Lisp OSes out there?

Mezzano [1] is written in Common Lisp and can be run on VirtualBox and QEMU.

[1] https://github.com/froggey/Mezzano

I install and experiment with Mezzano once or twice a year. Very cool project.

I liked most of the LispOS paper. One point of disagreement is planning for multi-user capability. I don’t think a LispOS based system would ever be more than a personal and highly customizable workspace. I would never expect to fire up a VPS on AWS or GCP and see a LispOS option next to Linux, BSD, etc.

I used a Xerox 1108 Lisp Machine from 1982 to about 1986. I stopped using it and gave it to someone else in my company when I was able to port the ISI Grapher to Coral Common Lisp on my Mac and thereby was able to port most of my code over.

To be clear, I moved off of the 1108 only because the hardware had become outdated and it ran too slowly. Otherwise I was very happy with a single user all Lisp work environment.

> One point of disagreement is planning for multi-user capability.

Multi-user tends to go hand in hand with other things, like good security, though. No doubt a modern single-user system could avoid the mistakes of Windows etc. But as you're already going most of the way there, why not just build multi-user in from the start?

I'm sure Mr Torvalds never expected to fire up a VPS and have the option of Linux alongside big and professional OSes like GNU and Unix.

This is similar to TempleOS where the creator wanted to make a powerful sport bike of an OS and didn't bother implementing multiuser functionality.

I agree and wish an OS was turtles all the way down with extensive scientific computing builtin. How many layers of indirection do I need? I'm stuck on Windows so I have to deal with that, .NET, Python, and numeric software like Mathematica, as well as the 50 bazillion programs I use ...all with fragile interfaces. Maybe that only seems suboptimal though and a true LispOS and Smalltalk OS running on baremetal would never truly work with the general populace?

> Maybe that only seems suboptimal though and a true LispOS and Smalltalk OS running on baremetal would never truly work with the general populace?

Been thinking about this a lot lately. What would a regular laptop user, say, or advanced user (office worker etc) need at a minimum to start using such a system? Probably just a functioning web browser and office suite. So long as any given LispOS/SmalltalkOS had those, and a working UI, I think it could actually work.

We have many more common and widely adopted standards now (XML formats, JSON, hell TCP/IP!) than we did back in the 70s and 80s when PCs were much more diverse and exotic -- and hence incompatible. Most things are compatible these days. This in theory gives us tremendous advantage when it comes to experimenting with new viable systems, don't you think?

I've been working on a fantasy console, which touches on this "design down" space. It's the kind of endeavor that lets me iteratively redesign to go deeper and deeper into the stack, by starting off with "scripting plus some kind of I/O", gradually extending the concept, and then, more recently, to shift towards using WASM, memory maps, a BIOS layer, and an emulator UI layer, giving me more spec clarity.

The I/O boundaries and resource limits are the really important bits: A web browser is ultimately just a bundle of I/O functions buried inside specialized feature APIs, with only some very specific "patch this" limits for maximum resource consumption. This has allowed the size of web apps to creep upwards indefinitely while not being able to actually do every task, because the feature APIs sandbox in a haphazard "intended for the average consumer" way. When you add in comprehensive limits, the design and the software built on it are able to last: it's the lack of limits that has always created software crises.

One could imagine, instead of the browser as a lowest-common-denominator sandbox, a fantasy system specialized "for text editing", which has a different spec from one specialized "for 3D graphics": This would narrow the scope of sandboxing, make it easier to task-optimize, and yet also create the need for a Lisp or Smalltalk type of system that needs flexibility "for prototyping". It's not an unreasonable path, considering how broader trends in computing are leading towards hardware specialization too.

Yes!, especially if the available web browser could use Office 365 or G Suite.

We need a lot of the features of a modern multi-user operating system to safely implement a modern JavaScript web browser, though. Perhaps so many that adding multi-user on later would not be much more work.

Why not? It's like any language runtime, except way more flexible. Jonathan Rees also did some work on security kernels for it:


Although, I think my default idea of a separation kernel plus (runtime/VM here) plus (CAmkES or Cap'n Proto) for glue would work for Lisp images, too. Throw in VLISP for as much component reliability as you get flexibility.

Symbolics Genera has a virtualized version that was used for production workloads.

It makes me sad how locked up and obscure Genera became. It could have been widespread and community developed and its ideas more pollinated into common use, if not for the business side.

Just like UNIX would have been, if it weren't for Bell Labs being prevented from using it commercially.

The AT&T lawsuit against BSD only came after that prohibition was lifted.

Emacs :) A great operating system in need of a decent editor

This is an old joke, but a good one. Modern Emacs has full colour graphics, a window manager, a decent browser, and Evil makes editing possible for people with ten fingers or fewer...

I (and many others) have been thinking about a system that boots to Emacs. With the Guile backend we might get enough speed to make it tolerable full-time.

The only thing missing (and I bet somebody has done this too) is a way to run graphical ELF binaries with an embedded X/Wayland type thing.

I use EXWM and it's basically EmacsOS.

Wooow !! I've been looking for this for years !! Currently using emacs + dwm. I'll check out EXWM see if it can beat my current workflow :D

It still misses some nice XEmacs features.

Really? What are they?

It has been a while, since I eventually went back to the comfy land of IDEs, but here is what I notice looking at my default Ubuntu installation.

Buffers being graphical, including the possibility to work with images inline, Lisp Machine style.

The gdb integration was more ergonomic; it felt closer to something like DDD.

GNU Emacs can work with inline images.

Cool! Thanks for correcting me.

That's why you have Evil :trollface:

evil is eval

From what I've read about Lisp machines like the Symbolics workstation or the TI Explorer, one of the huge advantages of a Lisp OS is that all of the OS functions are written in Lisp itself and are open source. Think of the ease of interfacing with something like that vs dealing with the various software brick walls and APIs you need to program in Windows. Want to know what the OS is doing? Read the code. Want to debug something? Step into the OS itself. Don't like your UI? Change it.

Lisp machines were always meant as workstations for programmers rather than office users, and it was expected that they would want to hack every part of the OS. That's pretty cool.

> Lisp machines were always meant as workstations for programmers rather than office users

Lisp Machines were initially developed in-house (MIT, Xerox, BBN, ...) as workstations for AI software development. They were also used for early non-AI work, since they were object-oriented, had a GUI, and were hardware extensible.

The commercialization by Xerox, Symbolics, LMI, TI, and some others also addressed developers initially, but at some point the machines were also deployed in machine rooms as production servers or as systems for 'end users'. That happened once interesting software was developed and needed to be deployed. Remember that the early stuff was all financed by the military, which also deployed some of what got developed, including some really fancy things like battle simulation systems, where the Lisp Machine created and maintained virtual worlds for training soldiers.

For example, Symbolics sold a whole range of machines and software for graphics & animation. A lot of the early computer graphics you saw on TV were done on Symbolics machines. The users then were graphics artists, animators, 3D graphics experts, and so on; thus the user interfaces and the machines themselves needed to be made end-user friendly. A bunch of the machines were deployed in production. For example, Swissair used software running on TI Explorers to optimize the sale of flight tickets, interfaced to their booking software running on mainframes. It was extremely difficult to port that software away from the TI Explorer, and for many years they bought up the remaining used TI Explorers around the globe to keep their system running, long after TI had exited that business. There are a bunch of other users of Lisp machines who were not programmers, for example in the military.

It was also thought that Lisp Machines could run as embedded systems in weapon systems (autonomous submarines, cruise missiles, ...), onboard a space station, and in other settings; that mostly did not happen. But AT&T/Lucent, for example, had a Symbolics-designed Lisp Machine board for rack-mounted installation, used as switching nodes in a telephony network switch. For that purpose the operating system of the Lisp Machine was stripped down for embedded use (called Minima).

Good info here, but Lisp has not historically been "object oriented." CLOS (the Common Lisp Object System) was added later as an abstraction layer over more fundamental Lisp concepts.

One of the most beautiful things about Lisp is that it has no modules, objects, routines, callbacks, variable types, or any of the other cruft of more recent languages. (It is the second-oldest high-level computer language.) But it can provide all of those things if the programmer wants them.

> Lisp has not historically been "object oriented."

Historically, Lisp was among the first object-oriented languages, in the mid-70s. It was also the first object-oriented language to be standardized (ANSI Common Lisp, 1994).

The original Lisp was used for object-oriented modeling in the 60s, way before objects were identified as a programming paradigm. Objects were defined as symbols with property lists. Property lists were pairs of keys and values (sound familiar?). Early Lisp papers also used the idea of classes, and more.
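A sketch of that symbols-plus-property-lists style, written in today's Common Lisp (the symbols BLOCK-1 and BLOCK-2 and their "slots" are made-up names for illustration):

```lisp
;; The symbol is the object's identity; its property list holds the
;; "slots" as key/value pairs.

(setf (get 'block-1 'color) 'red)           ; write a "slot"
(setf (get 'block-1 'supports) '(block-2))

(get 'block-1 'color)      ; => RED
(get 'block-1 'supports)   ; => (BLOCK-2)
(symbol-plist 'block-1)    ; the raw key/value pairs
```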

> was added later as an abstraction layer over more fundamental Lisp concepts

No, CLOS was added as a new programming paradigm to the first Common Lisp variant, similar to how one would add a Prolog-like language to Lisp. CLOS was also a third- or fourth-generation system. It was based on earlier object-oriented extensions to Lisp: LOOPS, CommonLoops, Flavors, New Flavors, an earlier more primitive class system, ...

Lisp did not include many/most of the CLOS abstractions, but Lisp makes it possible to implement and integrate them seamlessly.

It depends on what you mean by "object-oriented".

A Lisp with no formal OOP system still has objects: conses, strings, symbols. These are nicely abstracted: they are defined by operations which construct, access and otherwise manipulate them. These entities have an identity (which we can test with eq) and carry a type. They can move around freely in the program.

There is genericity without OOP: features like car working not only on a cons cell, but also on the nil object. Or length calculating the length of a vector, string or list.
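At a REPL, the genericity mentioned above looks like this (standard Common Lisp behavior):

```lisp
(car nil)           ; => NIL  (CAR accepts the NIL object, not only conses)
(length '(1 2 3))   ; => 3    (a list)
(length "abc")      ; => 3    (a string)
(length #(1 2 3))   ; => 3    (a vector)
(eq 'a 'a)          ; => T    (the identity test)
```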

All of this stands in contrast to a procedural paradigm in which there are just storage locations operated on by code, having whatever type the code believes them to have. (Algol 68, Fortran 66, C).

I think OOP as a paradigm is pretty well defined without my laying it out. The facts are that Lisp did not have OOP at the beginning, but it's easy to add, and that's one of the greatest strengths of Lisp as a programming language. You can build any programming concept from modern languages that you want, but you aren't constrained by any of them.

The historical reason given right at the start for the design of processes seems wrong. What do others think?

Sounds plausible to me. Adding virtual memory to get around a constraint of limited physical hardware & lack of MMU sounds like a software abstraction, which is what software engineers building an operating system are likely to want to build.

Your reasoning is valid, but it only explains virtual memory, not processes. Processes are primarily a means of isolation. If anything they're liable to increase memory usage, because the page tables need to be stored somewhere.

See also my comment at https://news.ycombinator.com/item?id=19121443#19129363

What exactly do you say is wrong with it and why? Just stating "it seems wrong" without any backing words isn't making your intention clear.

I didn't want to contaminate readers with my understanding. But enough time has passed now.

OP says:

> The computers for which UNIX was intended had a very small address space; too small for most usable end-user applications. To solve this problem, the creators of UNIX used the concept of a process.

My understanding is that processes were designed as a way to isolate users on a time-shared machine. Limitations on RAM were certainly correlated, but not the direct cause for the design of processes.

> A large application was written so that it consisted of several smaller programs, each of which ran in its own address space.

Pipelines don't save you space. All the processes in a pipeline run concurrently so their memory usage overlaps as well. If anything, splitting up into processes increases memory usage, because each process may not use all of the memory allocated to it.

Pipelines were designed as a way to reuse a small vocabulary of shared tools. Efficiency was only a secondary constraint. The primary goal was easily creating elegant programs for simple tasks.
