I wish other Common Lisp implementations, particularly Clozure CL, documented their internals as well as SBCL does. Apple Silicon support has been a long-standing feature request, but nobody can really work on it besides rme himself due to the sheer amount of "institutional knowledge" (and time!) it takes to understand CCL's compiler backend and then add support for a brand-new architecture. SBCL, on the other hand, was able to provide support fairly quickly, possibly due to having more maintainers, but I also think a big part of it was that many corners of the implementation are documented, and it's fairly easy to get hold of the people actively working on the compiler (doug, stassats, et al.).
> I recently decided to leave my current full-time job, and I’ll be working on CCL again. […] I’ll be able to work about half-time on an ARM64 port. Please write me privately if you want to talk about supporting that ARM64 work.
> I've worked on CCL for quite a few years, and I did the 32-bit x86 port, so I have experience in this area. I’m not as good a hacker as Gary Byers, but then again, few people are.
The issue preventing me from investing in Lisp is that Erlang gives me a similar REPL, which I believe (though I am not a Lisp developer) is a huge part of Lisp's value proposition. I already do REPL-based development in Erlang.
The macro story is worse in Erlang, but Lisp isn't perfect in other ways either.
The point is, I net out staying on the BEAM. What I like about Lisp is that I think it's a tool that will serve you for the rest of your career, even if you don't use it for production. That's a pretty cool proposition and maybe one day I'll get there. Emacs is a similar tool.
Last time I played with Elixir (I'll assume it's similar enough to Erlang), I found the REPL / interactive experience not as rich as CL's (which, I must say, reinforced my beliefs), even though the command-line tools are great:
- can't install libraries from the REPL while working on your program
- can't "compile this function" with a keystroke (I did find "send region/line/buffer to REPL" which is different, or "reload module")…
- … and get type errors and warnings
- can't get an interactive debugger on an error, go to the offending line (while keeping the debugger open), fix the function, compile it, switch back to the debugger, and resume execution from any point in the stack. I only found IEx.pry, which is useful and all, but looks more like Python's ipdb breakpoint.
- lack of the little useful editor commands, at least in the Emacs modes I tried, like "write this function in the REPL" or "eval and print the last expression". I struggled to get the "go to definition" feature working, but that depends on the IDE, I suppose. The Emacs modes are not in good shape :S
- no cross-references? I'm doubtful. And maybe other small things around the REPL; I didn't find my notes.
Differences between the runtimes are also at play here, but I still find it interesting to compare the REPL experiences; I'd welcome more comparisons.
BEAM/OTP is rather different from CL's image-based REPL experience.
You can get relatively close by putting your application into processes and keeping them in a collection or registry; when you've found an issue and changed the code in your editor, you can recompile and relaunch the affected processes. You'd lose a subtree of the application, and hence any in-memory state tightly coupled to those processes, but I find it rather easy to work this way. Often you'll put in-memory data in separate caching processes, which reduces the friction of restarting the processes holding application logic even more.
The 'let it crash' foundation of the BEAM isn't exactly tailored towards trapping and recovering from errors, but if you really wanted to, you could probably implement something like that with try/rescue. I suspect it would turn out quite obtuse compared to the CL mechanics, though.
> This describes fairly well what we've seen so far. The "lowtag" is those 4 bytes at the end of Lisp objects; also, we're clearly on a 64-bit architecture here, with N_WORD_BITS being 64, 8 of them reserved for the lowtag. It also details various values for the lowtag, the important one being...
Shouldn't that say bits? 8 bytes would be the whole thing anyway.
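For what it's worth, here's a rough sketch of how this kind of low-bit tagging works in general (the 4-bit mask and the tag value 7 below are illustrative assumptions, not SBCL's actual constants): heap objects are aligned, so the low bits of every pointer are always zero and can carry a type tag instead:

```lisp
;; Illustrative only: assume 16-byte alignment, leaving 4 low bits free.
(defconstant +lowtag-mask+ #b1111)

(defun lowtag-of (word)
  (logand word +lowtag-mask+))

(defun untagged-address (word)
  (logandc2 word +lowtag-mask+))

;; Tag a hypothetical aligned address with a hypothetical tag of 7:
(lowtag-of (logior #x100020 7))        ; => 7
(untagged-address (logior #x100020 7)) ; => #x100020
```

Masking the tag off recovers the real address, which is why the tag costs no extra pointer width.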
A lot of people write Lisp interpreters, post them to HN, and call it a day, but I think compilation is the far more interesting task for a Lisp implementation.
SBCL is great to poke around in, and the disassembler's great. Also check out ABCL and ECL for what compilation to JVM bytecode and to C looks like.
These compilers were pulling heroic data-structure optimization stunts in 1985 that no modern production compiler for a major compiled language can replicate in 2024.
The Scheme compiler in Paradigms of Artificial Intelligence Programming by Peter Norvig is a great introduction to compiling Lisp. I was particularly impressed that going from regular recursion to handling tail calls took a change of only one line.
The author correctly states that x86 has "lower bits at the beginning, bigger bits on the end", but, as you said, that's the definition of little-endian.
Apparently, I even made up an explanation for why "big-endian == start with the least significant bit" totally makes sense... couldn't convince the Universe this way either though.
Let’s make a trip back to the early and mid 1990s when C++ was a young language and Java was under development (it was released in 1995). At the time the most popular object-oriented language was Smalltalk, Common Lisp was at its peak of commercial popularity (largely in symbolic AI niches), and there were other niche OO languages that were heavily influenced by Smalltalk and Common Lisp (Objective-C in the NeXT world, Apple’s Dylan under development).
However, if I know my history correctly, Smalltalk and Common Lisp implementations were generally not cheap. Objective-C was largely tied to NeXT, which required buying into a niche ecosystem. Dylan was a casualty of Apple’s mid-1990s business struggles.
C++ benefitted from inexpensive implementations from vendors such as Borland and Microsoft, and it also benefitted from its close relationship with C, which had already gained a foothold in the 1980s.
Java benefitted not only from Sun’s marketing machine, but also from its free compiler and runtime from Sun. Java overtook Smalltalk, despite the fact that Java lacks Smalltalk’s dynamics.
I think had there been solid cheap or free Smalltalk and Common Lisp implementations around 1992, Java wouldn’t have gained a foothold, though C++ would have still been very appealing to C developers, though perhaps if Objective-C were more broadly available to non-NeXT developers, it would’ve been a formidable competitor to C++ in terms of providing a “C with objects” environment.
GNU Common Lisp (https://www.gnu.org/software/gcl/) seems to have existed in various forms from the mid 1980s, but it was probably outclassed by the contemporary big tech companies.
Gnu CL also had problems as a Common Lisp implementation. It works (like ECL, which is related) by compiling to C, which is then compiled with a C compiler and the object code loaded into the image. This approach generates inferior code to that which an implementation like SBCL can achieve.
All the Lisp implementations needed more memory than C programs would, which limited their impact before 32-bit machines became standard. And by that time, C was strongly entrenched.
There's arguably also the part where it was, AFAIK, mainly driven by the needs of Maxima, and its declared focus (or so I recall) on CLtL first didn't help things either.
In comparison, ECL, which forked from the same family, seems to have worked out much better over time.
>I think had there been solid cheap or free Smalltalk and Common Lisp implementations around 1992, Java wouldn’t have gained a foothold, though C++ would have still been very appealing to C developers, though perhaps if Objective-C were more broadly available to non-NeXT developers, it would’ve been a formidable competitor to C++ in terms of providing a “C with objects” environment.
100% agree. I think it is also interesting that, in comparison to before, the Lisp community of today is so supportive of free software, standing more on the left wing of the open-source community.
Interestingly enough, the free software movement came about due to Richard Stallman’s frustrations with Symbolics, one of the Lisp machine companies that came from the same MIT AI lab he worked in. It’s just that RMS chose to reimplement Unix (GNU) instead of creating a free Lisp operating system, though one could argue half-jokingly and half-seriously that GNU Emacs is that free Lisp OS.
The printer driver situation was RMS’s last straw, but he was already saddened by the demise of the sharing culture of the MIT AI lab once Symbolics and Lisp Machines, Inc. were founded, and he spent a considerable amount of effort reimplementing features from Symbolics:
I was about to say that, point by point. But think about it: Unix was the cool thing of the '80s, so even if ITS was a hackers' dream, mixing Unix's utilitarianism with Emacs' Elisp hackability on top of it, under a proper GNU kernel + Emacs userland system, wouldn't be so bad.
There's basically one commercial Common Lisp left (Allegro from Franz), and they're focusing more on things built on top of the Lisp (like AllegroGraph), not the Lisp itself. So the free CLs have sort of won by default, and perhaps also by sucking the oxygen out of the room for expensive proprietary CLs.
Do you not count LispWorks in this? I haven't had a chance to play with the commercial Lisps myself, but anecdotally it seems to still have some currency.
I didn't, but perhaps I should have. There's a question of whether it's live enough to count, or if it's just in maintenance mode. The last release was in 2021, but there have been gaps that long before that.
Looks similar to me. LispWorks also had patch releases in between. Both have been on the market for more than 35 years. Both are mostly written with CLOS and are thus especially easy to update with patches, in addition to the usual ways of updating code (-> late binding). One just loads patches (which are mostly compiled Lisp code) into a running Lisp and that's it. Alternatively, one can save a new image with the patches loaded.
When one needs a patch or a feature, one would typically contact them directly. Both provide patches to the users.
Franz has made that simple for the user: they have a relatively continuous stream of patches, and one can call an update function that fetches the necessary patches and installs them.
SBCL has monthly (!) releases, where the user (that's what I do) would typically compile it from scratch using the supplied sources. Updating is quick, around a minute for a recompile.
Java's claim to fame was being close enough to C++ while portable in binary form. This made it possible for millions of people to learn it quickly. Lisp is too distant from C++ to have had any chance.
Which is funny, because it was marketed as close while in reality being quite different from C++: all methods virtual, interfaces, type erasure, no operator overloading, no templates, to name just the first things that come to mind.
For entry-level programmers making the transition to Java, deep semantics like "all methods virtual" didn't matter, nor did the lack of features like operator overloading or templates. They didn't really need those to build stuff.
The killer "like C++" feature that made adoption easy was the "C/C++ like syntax, with braces and everything".
Java took over by targeting large teams of inexperienced developers.
C++ took over by adding OO to C with zero cost abstractions, and then evolving into modern C++ (which discourages OO and encourages composition via template instantiation) without leaving programs behind.
Lisp has supported composition forever (it is one of the big advantages of functional programming), and the macro language is similar to templates. I think the big problem is that it is difficult to scale lisp projects with inexperienced developers (loses to java) and is not zero cost (has a GC, is often interpreted; loses to c, c++ and now rust).
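On the macros-resemble-templates point, here's a toy sketch of the resemblance (nothing from any particular library; just an illustration): like a C++ template instantiated with a compile-time constant, the macro generates specialized code during compilation:

```lisp
;; Unrolls a fixed-size dot product at macroexpansion time,
;; so no loop (and no iteration overhead) exists at runtime.
(defmacro dot (n a b)
  `(+ ,@(loop for i below n
              collect `(* (aref ,a ,i) (aref ,b ,i)))))

;; (dot 3 x y) expands to:
;; (+ (* (aref x 0) (aref y 0))
;;    (* (aref x 1) (aref y 1))
;;    (* (aref x 2) (aref y 2)))
```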
And CCL doesn't even have an interpreter. Every form you type into the REPL is compiled. If you want an interpreter in CCL you have to write it yourself.
SBCL's eval interprets until it hits a lambda form, at which point it compiles (last I checked). But you are much more correct than the person you replied to :)
Unless I'm misremembering, this is only true in REPL, which should be better read as "SBCL is actually sometimes *AoT-compiling in interactive REPL sessions!!!"; #'LOAD-ing files would AoT-compile them by default.
It didn't use to: for various reasons, the evaluator inherited from CMUCL had bitrotted at one point, and for many years every form more complex than essentially a single function call resulted in compilation.
A few years ago there was work to fix the evaluator, and now EVAL has (limited) interpretation back.
CMUCL's interpreter evaluated IR1 (the first intermediate representation of its compiler) IIRC, so it wasn't possible to have a truly compilerless CMUCL _and_ a functional EVAL. I believe this IR1 interpreter was dropped from SBCL very early on. When SBCL gained an interpreter again it was a simple metacircular evaluator a la SICP that was unrelated to anything inherited from CMUCL. (This is all as of 15 or so years ago, I'm sure things have evolved since then!)
SBCL has two evaluators now, sb-eval and sb-fasteval. I don't know how much structure they share. By default it builds with sb-eval, but this can be changed with options to make.sh. sb-ext:*evaluator-mode* is still :compile by default.
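For the curious, flipping that default looks something like this at the REPL (a sketch, assuming a stock SBCL build with the default sb-eval evaluator):

```lisp
sb-ext:*evaluator-mode*                  ; => :COMPILE (the default)

;; Make EVAL use the interpreter instead of compiling each form:
(setf sb-ext:*evaluator-mode* :interpret)
(eval '(+ 1 2))                          ; => 3, interpreted this time

;; And back to the default:
(setf sb-ext:*evaluator-mode* :compile)
```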
> I think the big problem is that it is difficult to scale lisp projects with inexperienced developers (loses to java) and is not zero cost (has a GC, is often interpreted; loses to c, c++ and now rust).
I think the scalability problem is a bit overstated. The biggest issue with Lisp becoming popular is that it doesn't have a large user base and is not a new language like Rust. However, despite being old, Common Lisp /still/ has so many 'novel' things in it that, by the metrics of programmer ergonomics and programming user interface, it is a big winner for me.
As regards zero cost, Lisp programs can be made arbitrarily close to zero-cost, but here be demons.
I still haven't found an environment which lets you build the system as seamlessly as SBCL. The REPL allows you to build the code at runtime and update the system in place. Since I was a solodev, bug fixing and building the thing without going on the wrong path was worth I think months and years of my time.
edit: Once you get the idea of a running Lisp image and you are updating the image it's a game changer. You are not writing a program, then compiling it and then running it anymore (even though those steps happen seamlessly).
Not obvious to me that it provides the same experience: "The image is a map of the memory after the code was loaded. Unlike in Smalltalk, Factor code is always distributed in files rather than in the image."
Factor provides support for both file-based and image-based workflows. Factor is image-based and uses files when sharing, loading, or refreshing vocabularies. However, just like Lisp and Smalltalk, you can also save, share, and restore system state via an image. For example, you can save the whole state of Factor with the word save-image, and you can restore that image using ./factor -i=path-to-image. This image workflow, plus interaction via the inspector, provides a workflow experience very similar to both Smalltalk and Common Lisp.
Same here. It is just really annoying to me (ymmv etc etc) to use another env: everything is so much harder to inspect, debugging is harder, and with SBCL you still end up with a really fast system.
What about state belonging to the older code, updating http handlers, user sessions, etc, on production? While living-in-the-image seems admirable, I always wondered how it blends with real production and real ops.
The only thing I've heard about this is Paul Graham's story of fixing a particular bug in Viaweb while talking to the user, but that's just one story from the '90s. It would be great to hear more about modern setups.
Not sure if I'm up to date, but Jupyter (as good as it is) suffered from strange evaluation rules for cells, which make it difficult to do anything large, since side effects impede thinking quickly.
In my opinion it's indeed like Greenspun's tenth rule; it's a vague imitation. If people weren't so squeamish about parens, the world would be a nicer place. The tech has been there for a long time already.
I love Lisp. I love the stability, the macros, the crash handlers, the REPLs, the elitism, SLIME, Emacs, the works. But IMO sexprs are straight-up terrible, and hacking around that is a chore.
I tried to convince myself of that, but putting a function/operator in the same parens as the arguments and then making a special case for the first element in the parens will never make sense to me, since you can get the "expressive power" in any number of ways. Ruby makes more sense.
Yeah, it's interesting. I find them elegant and want all my languages to have that syntax (no deviation either like [] for lists etc). Guess we cannot argue about taste :)
You don't strictly need to learn Emacs, but you should probably just join the cult. I bit the bullet and did it about 20 years ago now, and it's been a good thing. Heck, I've got a non-programming dayjob now, but I've still got a keyboard that puts the control key under my thumb: https://x-bows.com.
Why would you have to learn emacs? You can largely batch compile and run lisp code same as any other language, if that is what you want to do. In many ways, the workflow can be very similar to python, if you don't want to get too advanced in the repl. And, to stress, you don't have to get too advanced to get going.
You can use emacs just for the REPL/debugger, and another editor for editing files. I did this for a couple of years, after previously using CLISP for its REPL. You can use the menus for things; the only keyboard shortcut you MUST know is CTRL-G (which if you hit enough times will cancel whatever operation you accidentally started).
Alternatively, try geany-lisp[1]; it was created specifically for those with Emacs phobia. There are probably some sharp corners, so feel free to file issues if you run into any. It should work with Geany 1.26-1.38; while posting this, I noticed there is now a Geany 2.0. I'm assuming the major version bump broke plugins, so it may not work anymore.
I normally dabble with CLISP (notably because of its built-in readline), and if I could SOMEHOW get it to build with SSL, I'd play with it more.
This is what I stick at the top of my Lisp files when dabbling.
    ;; reload the working file into the running image
    (defun l ()
      (load "file.lisp"))

    ;; edit it in vi (ext:shell is CLISP-specific)
    (defun e ()
      (ext:shell "vi file.lisp"))
And then I just muddle my way through with a (e) and (l) cycle. Since I tend to not work on 10,000 line files -- this works fine (my files are < 5K lines). More than fast enough, and I get to retain any work variables within the image (plus my command history). It's quite effective really. (print...) debugging works because you can run your code at whatever granularity you like (and you can always refactor it to a finer grain if you want). Don't doubt it until you try it. Turn around is very fast.
That said, I discovered that, for some reason, firing off something as simple as "vi file.lisp" from SBCL is stupefyingly complicated. I made several 10-second Google searches and 5-second GPT attempts to make it work, but gave up.
So, can't say I can recommend that style of development in SBCL unless someone is willing to chime in with the, apparently, paragraph of code necessary to launch vi on a file.
EDIT: note that SBCL comes with UIOP included by default, but if you want to use SBCL's implementation-specific facility, it's only slightly different:
    (defun c ()
      ;; :input t / :output t make the child inherit the Lisp
      ;; process's streams, so the editor gets the terminal
      (sb-ext:run-program "/usr/bin/emacs"
                          '("-nw" "file.lisp")
                          :input t
                          :output t))
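For reference, the portable UIOP spelling should be roughly this (a sketch; I'm relying on uiop:run-program's :interactive stream option, which hands the terminal to the child process):

```lisp
(defun c ()
  ;; :interactive makes the child inherit the Lisp process's
  ;; terminal, which a full-screen editor like vi needs.
  (uiop:run-program '("vi" "file.lisp")
                    :input :interactive
                    :output :interactive
                    :error-output :interactive))
```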
Neither ends in a working development environment or even text editor.
Compare this to learning Python or Javascript or the C family. VSCode or even just your standard OS text editor is all you need. You can decide to start learning, find a tutorial on Youtube, and execute your hello world in 60-120 seconds.
> At one point many years ago, I wanted to check out this new Python thing. It was so easy, I've been hooked ever since.
IDLE is an underrated feature of Python. The default install includes a barebones IDE together with the REPL, making setup for beginners nearly zero effort. In Lisp land, on the other hand, step 0 to learning most Lisps is learning Emacs first.
I use slimv with vim, but basic Emacs is not difficult: Ctrl-x b to switch between buffers, Ctrl-x Ctrl-s to save, Ctrl-x Ctrl-c to exit, Ctrl-c Ctrl-c to compile the CL form at point under Slime, Ctrl-x Ctrl-e to eval the last expression (works for Elisp, and for CL under Slime).
Also, knowing CL makes learning Elisp a breeze, so you can customize your editor like nothing else.
I'm in the same boat. I just don't care about Emacs, don't like it, and there's little incentive when I'm just playing around. It's a shame, because Lisp is really fun and interesting. I just don't want to spend so much time dicking around with an editor just so I can write some code.
LISP feels like both an "assembly language" and a "higher level language" at the same time. It's excellent for solving unusual problems, but it carries a lot of baggage when aiming for "usual" problems.
The big problem for Lisp was that it was too slow during the '80s and '90s. At that time, C and C++ took the market share and have maintained it since. New languages had to conform to C, since Lisp has its own independent way of doing things.