Once we develop molecular robots - they are going to implement LispOS. :-)
"The most important aspect of a Lisp operating system is not that all the code be written in Lisp, but rather to present a Lisp-like interface between users and the system and between applications and the system."
A developer could also run the whole OS with full cross-references, documentation references, and source references. There are also selective ways to run some code with more debugging support, like source locators for local variables/functions, or running it in the Lisp interpreter - usually code would run compiled, but the interpreter supports things like source stepping and viewing macro expansions.
If you look at the code of OpenGenera, it’s mostly OO/imperative.
Though modern FP has significantly diverged from Lisp and its roots: things like static typing, higher-kinded types, etc.
If you want modern examples, check out SBCL, the Common Lisp compiler https://github.com/sbcl/sbcl
While a Scheme rather than a Common Lisp, you can check out Gambit Scheme. https://github.com/gambit/gambit
It's written in a functional style and compiles Scheme to C. The generated code is really fast.
As an aside, there really should be no difference in performance between a loop and a tail-recursive call, assuming the compiler does tail-call optimization (TCO). So if that's your definition of "functional", then it should have the same performance as any "imperative" language.
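A small sketch of the equivalence in Common Lisp (where, unlike Scheme, TCO is not mandated by the standard, though SBCL performs it under default optimization settings):

```lisp
;; Two ways to sum 1..n: an explicit loop and a tail-recursive helper.
;; Under TCO, both compile to essentially the same jump-based code.

(defun sum-loop (n)
  (loop for i from 1 to n sum i))

(defun sum-tail (n &optional (acc 0))
  (if (zerop n)
      acc
      (sum-tail (1- n) (+ n acc))))   ; tail call: no stack growth under TCO

;; (sum-loop 1000000) and (sum-tail 1000000) return the same result;
;; on SBCL, DISASSEMBLE shows the recursive version compiled to a loop.
```

The accumulator argument is what makes the call a *tail* call: nothing remains to be done after the recursive invocation, so the compiler can reuse the current stack frame.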
While I've grown a lot since the times when I would act on those thoughts more, I still get a lot of satisfaction from moments when I get to exercise that part of me.
I would love to get involved with a project that is doing this. Are there any others besides Mezzano?
I liked most of the LispOS paper. One point of disagreement is the planning for multi-user capability. I don't think a LispOS-based system would ever be more than a personal and highly customizable workspace. I would never expect to fire up a VPS on AWS or GCP and see a LispOS option next to Linux, BSD, etc.
I used a Xerox 1108 Lisp Machine from 1982 to about 1986. I stopped using it and gave it to someone else in my company when I was able to port the ISI Grapher to Coral Common Lisp on my Mac and thereby was able to port most of my code over.
To be clear, I moved off of the 1108 only because the hardware had become outdated and it ran too slowly. Otherwise I was very happy with a single user all Lisp work environment.
Multiuser tends to go hand in hand with other things, like good security, though. No doubt a modern single-user system could avoid the mistakes of Windows etc. But if you're already going most of the way there, why not just build multiuser in from the start?
I'm sure Mr Torvalds never expected to fire up a VPS and have the option of Linux alongside big and professional OSes like GNU and Unix.
I agree, and wish an OS were turtles all the way down with extensive scientific computing built in. How many layers of indirection do I need? I'm stuck on Windows, so I have to deal with that, .NET, Python, and numeric software like Mathematica, as well as the 50 bazillion programs I use... all with fragile interfaces. Maybe that only seems suboptimal, though, and a true LispOS or Smalltalk OS running on bare metal would never truly work for the general populace?
Been thinking about this a lot lately. What would a regular laptop user, say, or advanced user (office worker etc) need at a minimum to start using such a system? Probably just a functioning web browser and office suite. So long as any given LispOS/SmalltalkOS had those, and a working UI, I think it could actually work.
We have many more common and widely adopted standards now (XML formats, JSON, hell TCP/IP!) than we did back in the 70s and 80s when PCs were much more diverse and exotic -- and hence incompatible. Most things are compatible these days. This in theory gives us tremendous advantage when it comes to experimenting with new viable systems, don't you think?
The I/O boundaries and resource limits are the really important bits: A web browser is ultimately just a bundle of I/O functions buried inside specialized feature APIs, with only some very specific "patch this" limits for maximum resource consumption. This has allowed the size of web apps to creep upwards indefinitely while not being able to actually do every task, because the feature APIs sandbox in a haphazard "intended for the average consumer" way. When you add in comprehensive limits, the design and the software built on it are able to last: it's the lack of limits that has always created software crises.
One could imagine, instead of the browser as a lowest-common-denominator sandbox, a fantasy system specialized "for text editing", which has a different spec from one specialized "for 3D graphics": This would narrow the scope of sandboxing, make it easier to task-optimize, and yet also create the need for a Lisp or Smalltalk type of system that needs flexibility "for prototyping". It's not an unreasonable path, considering how broader trends in computing are leading towards hardware specialization too.
Although, I think my default idea of a separation kernel plus (runtime/VM here) + (CAmkES or Cap'n Proto) for glue would work for Lisp images, too. Throw in VLISP for as much component reliability as you get flexibility.
The AT&T lawsuit against BSD only came after that prohibition was lifted.
I (and many others) have been thinking about a system that boots to Emacs. With the Guile backend we might get enough speed to make it tolerable full-time.
The only thing missing (and I bet somebody has done this too) is a way to run graphical ELF binaries with an embedded X/Wayland type thing.
Buffers being graphical, including the possibility to work with images inline, Lisp Machine style.
The gdb integration was more ergonomic, felt closer to something like DDD.
Lisp machines were always meant as workstations for programmers rather than office users, and it was expected that they would want to hack every part of the OS. That's pretty cool.
Lisp Machines were initially developed in-house (MIT, Xerox, BBN, ...) as workstations for AI software development. They were also used for early non-AI stuff, since they were object-oriented, had a GUI, and were hardware-extensible.
The commercialization by Xerox, Symbolics, LMI, TI, and some others also initially targeted developers - but at some point the machines were also deployed in machine rooms as production servers or as systems for 'end users'.
That happened once interesting software was developed and needed to be deployed - remember that the early stuff was all financed by the military, and they deployed some of what got developed - some really fancy stuff like battle-simulation systems, where the Lisp Machine created and maintained virtual worlds for training soldiers.
For example, Symbolics sold a whole range of machines and software for graphics & animation. A lot of the early computer graphics you saw on TV were done on Symbolics machines. The users then were graphic artists, animators, 3D graphics experts, ... Thus the user interfaces and the machines themselves needed to be made end-user friendly. A bunch of the machines were deployed in production. For example, Swissair used software running on TI Explorers to optimize the sales of flight tickets - it was interfaced to their booking software running on mainframes. It was extremely difficult to port that software away from the TI Explorer, and for many years they bought up remaining used TI Explorers all around the globe to keep their system running - long after TI had exited that business. There are a bunch of other users of Lisp machines who were not programmers - for example in the military.
It was also thought that Lisp Machines could run as embedded systems in weapon systems (autonomous submarines, cruise missiles, ...), onboard a space station, and in other settings - that never really happened, though. But AT&T/Lucent, for example, had a Symbolics-designed Lisp Machine board for rack-mounted installation, used in switching nodes of a telephone network. The operating system of that Lisp Machine was stripped down for embedded use (it was called Minima).
One of the most beautiful things about Lisp is that it has neither modules, objects, routines, callbacks, variable types, nor any of the other cruft of more recent languages. (It is the second-oldest high-level programming language.) But -- it can provide all of those things if the programmer wants them.
Historically Lisp was among the first object-oriented languages in the mid 70s. It was also the first standardized object-oriented language (in 1994).
The original Lisp was used for object-oriented modeling in the 60s - way before objects were identified as a programming paradigm. Objects were defined as symbols with property lists. Property lists were pairs of keys and values (sound familiar?). Early Lisp papers also used the idea of classes and more.
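That style still works in today's Common Lisp. A minimal sketch, using a made-up symbol `point-1` as the "object" (the names are illustrative, not from any particular system):

```lisp
;; A 1960s-style object: a symbol whose property list carries its state.
;; POINT-1, POINT, X and Y are hypothetical names for illustration.
(setf (get 'point-1 'class) 'point)
(setf (get 'point-1 'x) 3)
(setf (get 'point-1 'y) 4)

(get 'point-1 'x)        ; => 3
(get 'point-1 'class)    ; => POINT
(symbol-plist 'point-1)  ; the raw key/value pairs, e.g. (Y 4 X 3 CLASS POINT)
```

The property list is exactly the flat key/value structure the comment describes - the same shape as a modern hash map or JSON object, decades earlier.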
> was added later as an abstraction layer over more fundamental Lisp concepts
No, CLOS was added as a new programming paradigm to the first Common Lisp variant - similar to how one would add a Prolog-like language to Lisp. CLOS was also a third- or fourth-generation system. It was based on earlier object-oriented extensions to Lisp: LOOPS, CommonLoops, Flavors, New Flavors, some more primitive class systems, ...
Lisp did not include many/most of the CLOS abstractions, but Lisp made it possible to implement and integrate them seamlessly.
A Lisp with no formal OOP system still has objects: conses, strings, symbols. These are nicely abstracted: they are defined by operations which construct, access and otherwise manipulate them. These entities have an identity (which we can test with eq) and carry a type. They can move around freely in the program.
There is genericity without OOP: features like car working not only on a cons cell, but also on the nil object. Or length calculating the length of a vector, string or list.
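These behaviors are part of standard Common Lisp and are easy to check at a REPL:

```lisp
(car nil)          ; => NIL -- CAR/CDR of the empty list are defined, not errors
(cdr nil)          ; => NIL
(length '(1 2 3))  ; => 3 -- LENGTH is generic over all SEQUENCE types:
(length "hello")   ; => 5 --   lists,
(length #(a b c))  ; => 3 --   strings, and vectors
(eq 'foo 'foo)     ; => T -- symbols carry identity, testable with EQ
```

No class declarations or interfaces are involved; the genericity comes from the built-in types themselves.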
All of this stands in contrast to a procedural paradigm in which there are just storage locations operated on by code, having whatever type the code believes them to have. (Algol 68, Fortran 66, C).
See also my comment at https://news.ycombinator.com/item?id=19121443#19129363
> The computers for which UNIX was intended had a very small address space; too small for most usable end-user applications. To solve this problem, the creators of UNIX used the concept of a process.
My understanding is that processes were designed as a way to isolate users on a time-shared machine. Limitations on RAM were certainly correlated, but not the direct cause for the design of processes.
> A large application was written so that it consisted of several smaller programs, each of which ran in its own address space.
Pipelines don't save you space. All the processes in a pipeline run concurrently, so their memory usage overlaps as well. If anything, splitting a task into processes increases memory usage, because each process may not use all of the memory allocated to it.
Pipelines were designed as a way to reuse a small vocabulary of shared tools. Efficiency was only a secondary constraint. The primary goal was easily creating elegant programs for simple tasks.