ChrysaLisp is an Assembler/C-Script/Lisp 64-bit OS: MIMD, multi-CPU, multi-threaded, multi-core, multi-user, and it will build from source in ~2 seconds on most OS X or x64 Linux desktops.
Maybe this will let me live vicariously through someone who has actually built the project I always dream of creating in my free time.
I think the one major thing that caused me to give up on the project was that whenever it became time to refactor or to think about the big picture (e.g. where do the drivers live? What about a libc implementation: can you just get glibc to work on your OS, or do you have to rewrite the whole of libc?), I found that I was just copying whatever Linux had already done. The feeling that I wasn't really doing any original work and was making a shittier version of an existing system demotivated me for some reason, even though this was just supposed to be a learning exercise. So some advice for anyone attempting this: be prepared to go out of your way to rewrite significant amounts of helper code that has nothing to do with actually building your OS.
I made a similar OS. ACPU OS runs on mobile/desktop/browsers/bare metal, with multiuser p2p team development, livecoding, a time-travel debugger, multiple libraries, startup in 30 seconds on an iPad in development mode with synchronization, source compilation and symbol navigation, etc. Powered by the ACPUL programming language.
Here is OS API:
Simple multiplayer DOTA game prototype:
Follow us: https://twitter.com/ACPUStudio
I've never used one, but have heard people rave about how productive they were on them.
This project is cool, but, in short, no.
Please take a look at this paper, which answers the question of what benefits could be obtained with a Lisp OS. This holds regardless of whether such an OS has a nice IDE.
This is a really good paper, recommended reading.
I expect this will be a fairly controversial comment, so I want to preface this by saying that I'm a big Lisp fan (just look at my handle). Lisp is my favorite programming language. I've been using it for nearly forty years. My first Lisp was P-Lisp on an Apple II in 1980. And I worked on Symbolics Lisp machines in the 1990s. They were very cool, but there's a reason they failed: general-purpose computing is infrastructure, and the economics of infrastructure are such that having a single standard is the most economical solution, even if that standard is sub-optimal. For better or worse, the standard for general-purpose computing is the C machine.
Because it's general-purpose you certainly can run Lisp on a C machine (just as you could run C on a Lisp machine). You can even do this at the system level. But Lisp will always be at a disadvantage because the hardware is optimized for C. Because of this, C will always win at the system level because at that level performance matters.
But that in and of itself is not the determining factor. The determining factor is the infrastructure that has grown up around the C machine in the last few decades. There is an enormous amount of work that has gone into building compilers, network stacks, data interchange formats, libraries, etc. etc. and they are all optimized for C. For Lisp to be competitive at the system level, nearly all of this infrastructure would have to be re-created, and that is not going to happen. Even with the enormous productivity advantages that Lisp has over C (and they really are enormous) this is not enough to overcome the economic advantages that C has by virtue of being the entrenched standard.
The way Lisp can still win in today's world is not by trying to replace C on the system level, but by "embracing and extending" C at the application level. I use Clozure Common Lisp. It has an Objective-C bridge, so I can call ObjC functions as if they were Lisp functions. There is no reason for me to know or care that these functions are actually written in C (except insofar as I have to be a little bit careful about memory management when I call C functions from Lisp) and so using Lisp in this way still gives me a huge lever that is economically viable even in today's world. I have web servers in production running in CCL on Linux, and it's a huge win. I can spin up a new web app on AWS in just a few minutes from a standing start. It's a Lisp machine, but at the application level, not the system level. My kernel (Linux) and web front end (nginx) are written in C, but that doesn't impact me at all because they are written by someone else. I just treat them as black boxes.
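To make the "embrace and extend" point concrete, here is a minimal sketch of what an Objective-C call looks like through CCL's bridge (assuming a recent Clozure CL on macOS; the #/ reader macro and the ns package are CCL-specific, and the string used is just an illustration):

```lisp
;; Load the Cocoa bridge (ships with Clozure CL on macOS).
(require :cocoa)

;; #/ turns an Objective-C selector into a callable Lisp function,
;; so a message send reads like an ordinary Lisp call.
(let ((s (#/stringWithUTF8String: ns:ns-string "hello from Lisp")))
  (#/length s)           ; selector with no arguments
  (#/uppercaseString s)) ; returns a new NSString
```

There is no wrapper layer to write by hand; the bridge generates the glue, which is exactly why you don't have to care that the callee is C underneath.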
I don't want to denigrate ChrysaLisp in any way. It's tremendously cool. But cool is not enough to win in the real world.
[UPDATE] ChrysaLisp is actually doing the Right Thing with respect to its GUI by using a C library (SDL). But it's trying to re-invent the compiler wheel (and the language design wheel) so that it can run on bare metal and "grow up to be a real Lisp machine" some day, and I think that aspect of the project is a fool's errand. There are already many Lisps that can run on bare metal (ECL was specifically designed for that). None of them have succeeded in displacing C, and I believe none ever will because the economic hurdles are insurmountable.
I basically agree with everything you wrote, but for two subsidiary points.
1 - it's a phenomenal development platform. I have on multiple occasions built large systems later deployed, for the reasons you discussed, via C ports. It didn't nullify the work done in Lisp: we learned a lot, and quickly; were able to do quick experiments, etc. But ultimately we needed to join the wider world.
2 - there is / has been a sea change in the software deployment model; it includes web browsers, even more rapid and more flexible development, etc. Clojure is a good example and consequence of this change. It's quite possible that the circumstances that led to Lisp's decline (though was it really ever "mainstream"?) may swing the tide the other way. E.g. "rapid development and realtime deployment of Lisp-based microservices", to agglomerate some popular buzzwords.
FWIW: I was a developer from MACLISP to MIT CADRS, various Symbolics machines (at one point I had two 36xx machines on my desk with color monitors attached) and various D machines. Right now I'm doing my development in C++ though.
What was this?
The hardware was not powerful - for example they had very little RAM.
But an emulator, called Medley, was developed for SPARC/UNIX and x86/Linux.
When I worked at PARC I mostly used Interlisp, but I did use all those environments.
Interestingly, another current top post on HN is HPAT - A compiler-based framework for big data in Python (https://news.ycombinator.com/item?id=15466829). This replaces C level languages with something like Python for a specific domain. That could be somewhere lisp wins as well?
As for the C machines, Moore's law already has a foot in the grave. Ten years ago, we would have just assembly-optimized any inner-loop bottlenecks and assumed that in a few more months Intel would get us the rest of the way there. Now I'm seeing more and more being pushed off into GPUs and custom ASICs. With clock speeds topped-out, the mainframe approach of pushing more into the hardware grows increasingly attractive.
That, and so much is written in other garbage-collected languages today that LISP-friendly hardware could benefit Python/C#/Java/PHP programs as much as it would LISP ones.
Please write an article about it and post a link to it here.
The heavy lifting is all done by Hunchentoot:
Take a look at the "lack" and "Clack" packages for Common Lisp by Eitaro Fukamachi; they provide a layer of abstraction so your program thinks it has a web server, but in production it can be run by various Lisp web servers (i.e. Hunchentoot) without changes to your code.
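To illustrate the abstraction: a Clack application is just a function of the request environment, and the backend is chosen at startup. A minimal sketch (assuming Clack is loaded, e.g. via Quicklisp; the handler below is made up for illustration):

```lisp
;; A Clack app: takes the request environment, returns
;; (status headers body) -- no server-specific code anywhere.
(defvar *app*
  (lambda (env)
    (declare (ignore env))
    '(200 (:content-type "text/plain") ("Hello from Lisp"))))

;; Swap the production server without touching the app:
;; (clack:clackup *app* :server :hunchentoot :port 5000)
;; (clack:clackup *app* :server :woo :port 5000)
```

The same `*app*` runs unchanged under Hunchentoot, Woo, or any other supported backend, which is the portability the comment above describes.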
I wasn’t aware of Hunchentoot.
Take a look at Clack. (Caveman2 is a web framework for Common Lisp, by Eitaro ("8arrow") Fukamachi; another recent web framework is Lucerne, by Fernando Borretti ("eudoxia0"))
Caveman is built on top of Clack and I think Lucerne is, too.
Another good option is Ningle, by Eitaro, which is a really minimal set of functions and macros to create a web application. Runs on top of Clack as well.
I usually run nginx as a front-end because it's faster at serving static content and it provides some security benefits, but it's not necessary.
But is winning in the real world really important? The OP's question was, after all, "Does this roughly offer the same benefits that the old Lisp machines provided?", and as Lisp machines did not really win in the real world the first time around, I don't think that's really a prerequisite for bringing the benefits of a Lisp machine.
In fact, Clozure Common Lisp is a perfect example. It provides (IMHO) 80-90% of the productivity advantages of a LispM when you run it on a Mac (because the IDE is Mac-specific).
Cool is synergistic with other qualities. On top of that, everything is contextual. Does anyone know of any particularly cool ClojureScript electron apps?
LightTable was cool.
In a classical OS (Linux, Windows, Mac) the developer is responsible for seamless integration (in KDE or MS Office, for instance). It is easy to break design guidelines here, which causes ugly interoperation, or even a lack of interoperation.
At least in part, because of storage and memory constrained things like home routers.
I'm not so sure about the actual intentions behind the development beyond "experimenting".
A few years later I got a Xerox 1108 Lisp Machine at work, a cheaper alternative to your Symbolics, but very nice.
(I particularly like: "I strongly encourage you not to actually use this to learn Lisp!" :-)
nevertheless nice experiment
Jokes aside, you could probably call this a "desktop environment" and communicate most of the meaning. What remains is either a "virtual machine", "runtime system", or something along those lines.
$ git clone https://github.com/vygr/ChrysaLisp.git ChrysaLisp
$ cd ChrysaLisp
$ sudo apt-get install libsdl2-ttf-dev
$ make -j
0) Make sure you have a recent Xcode installed
1) Install the libsdl2 and libsdl2_ttf frameworks under /Library/Frameworks
2) git clone https://github.com/vygr/ChrysaLisp.git ChrysaLisp
3) cd ChrysaLisp
4) make (takes about 2s!!)
5) ./run.sh (shows full working gui. Amazing!)
Yes, I have both frameworks installed. I even did the brew commands just for S&G ... same result after make command.
antimass@gem:~$ ls /Library/Frameworks/SDL*
drwxr-xr-x@ 22 Sep 14:54 SDL2.framework
drwxr-xr-x@ 1 Feb 2016 SDL2_ttf.framework
cc -o obj/Darwin/x86_64/main obj/Darwin/x86_64/main.o -Wl,-framework,SDL2 -Wl,-framework,SDL2_ttf
ld: framework not found SDL2
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [obj/Darwin/x86_64/main] Error 1
echo x86_64 > arch
echo Darwin > platform
unzip -nq snapshot.zip
cc -c -nostdlib -fno-exceptions \
-o obj/Darwin/x86_64/main.o main.c
cc -o obj/Darwin/x86_64/main obj/Darwin/x86_64/main.o -Wl,-framework,SDL2 -Wl,-framework,SDL2_ttf
Maybe somebody can try running it from OpenFirmware (see my sibling post to yours), which has been ported to lots of platforms, and has lots of code one could borrow from in the way of drivers. Of course at the end of the day I guess the point of QEMU / Xen is to use Linux as a device driver.
Does anyone use it in production? Is it comparable to JS?
It was almost mainstream in the 1980s, when there were four Lisp Machine companies (Xerox, Symbolics, Texas Instruments, LMI) and symbolic AI, particularly expert systems, was in wide use. Not so much now.
I still use it. If you know it you can be much more expressive and productive in it than in other mainstream languages, and when there's something you need that's missing from Lisp, you can extend Lisp to include it.
There are reasons it's fallen from favour, some of which lisper has given here. It's a mistake to think languages necessarily succeed or fail on their merits.
In the early 80s it was rather popular, but it required expensive specialized hardware and software. Or you could execute it on more common hardware, but memory requirements were pretty high, which added to the expense.
It's not like in 2017 where you can have many excellent Lisp implementations for free and they run blazingly fast on your machine, without needing more memory than what you have already.
It also has something to do with the triumph of UNIX over the Lisp machines
Comparing it to this Lisp OS is a bit unfair, though. This is an async, distributed, actor-model OS with various practical shortcuts: refcounted, with cycles broken up manually. The Lisp is very rough but extremely fast. No linked lists (i.e. cons cells), just vectors. (A very good idea, btw; same as in potion.)
It's more of a better macro assembler with message-passing OO (Smalltalk- or Erlang-style, but better than BEAM).
I still want to see the other TAOS features, though: self-routing network, dynamic binding.
PS> musl libc is beautiful.
No, it lacks a lot of things in comparison, and if it only lacks macros, then it's lacking a lot as well.
I could stop here, because the two differences above are strong and significant enough to tip the balance towards CL. But there's more:
3. A really good object system. CLOS might be the most advanced OOP system out there.
5. Image-based interactive development
6. A killer exception handling system: The condition-restart system.
7. Speed. CL can execute significantly faster than JS and can approach C/Fortran speeds if needed (by using tricks.)
8. Optional static type checks, and type declarations for increased performance if needed.
There are even more differences, to be honest.
One question, you say "CL is very strongly typed". Then you also say "Optional type checks". Are you saying CL is optionally very strongly typed?
Note that I wrote "optional static type checks".
CL is always strongly typed. A character array, for example, stays as a character array. It can't be used directly in place of a byte array, for example.
But what it allows, optionally, are type declarations. These, in implementations like SBCL, allow compile-time ("static") type checking.
The declarations also allow big improvements in code speed, because the compiler produces code that is specific to a data type. (You can always see the compiled assembly code for a Lisp function by using the "disassemble" function.)
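A small sketch of the idea (SBCL syntax; sum-floats is a made-up example, not from the thread):

```lisp
(defun sum-floats (v)
  ;; The declarations tell the compiler the exact element type,
  ;; so it can emit unboxed double-float arithmetic.
  (declare (type (simple-array double-float (*)) v)
           (optimize (speed 3)))
  (let ((s 0d0))
    (declare (type double-float s))
    (dotimes (i (length v) s)
      (incf s (aref v i)))))

;; With the declarations in place, SBCL warns at compile time about
;; type mismatches, and (disassemble #'sum-floats) shows tight,
;; type-specialized machine code instead of generic dispatch.
```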
"The immediate concern at Netscape was it must look like Java. People have done Algol-like syntaxes for Lisp but I didn't have time to take a Scheme core so I ended up doing it all directly and that meant I could make the same mistakes that others make."
See https://github.com/froggey/Mezzano for something that can run on bare metal.
In other words, it can become an OS if you merely add to it the exact things that define an OS.
The language runtime is the OS.
I pick "PC INTERN: system programming", 1992.
Tanenbaum and Silberschatz both have excellent textbooks covering the fundamentals of Operating Systems. Neither book focuses much on "language runtimes".
Anyone who actually programmed MS-DOS knows that we used to program directly against the hardware for actual work. MS-DOS was nothing more than what is usually known as a monitor in OS literature.
Continuing the texts from people more relevant to the CS world than me,
"An operating system is a collection of things that don't fit into a language. There shouldn't be one." - Dan Ingalls, "Design Principles Behind Smalltalk"
"Building Parallel, Embedded, and Real-Time Applications with Ada" - John McCormick
"Project Oberon: The Design Of An Operating System And Compiler" - Niklaus Wirth
And not to let this just be theory, here are a few examples of commercial products using the language runtime to interface with the hardware.
Like I said, I'm not interested in debating the definition of OS. Best regards.
"Could move to bare metal eventually but it's useful for now to run hosted while experimenting"
And this: https://github.com/vygr/ChrysaLisp/blob/master/sys/kernel.vp
That said, I agree with you that it kinda stinks this doesn't have its own kernel - it really is just a GUI without it. It surprises me that so many people are blowing it off as no big deal. Writing the kernel is not a simple detail, we're talking about a lot of functionality that is missing that takes a lot of work to get right.
That said, to me, an operating system is basically any environment that allows me to run applications.
Usually, that implies a kernel, since you can't do much without one, but in some circumstances, you might not have a "kernel" in the traditional sense.
For example, LXC containers don't have their own kernel, but they provide an environment - binaries, libraries, etc - where I can run applications. I can rightly call that an OS, I think.
An operating system is the software used to interface hardware and applications. If you miss out on control of hardware it's not an OS. At most, solutions like unikernels are an edge case, but anything with less direct control over hardware than unikernels are not. If the term 'OS' is too loose it becomes meaningless.