
SBCL – Past, Present, Future [pdf] - tosh
https://european-lisp-symposium.org/static/2019/rhodes.pdf
======
nextos
Tangential question, but what Lisp is the most promising one to invest time on
right now?

Common Lisp still has some activity, mainly coming from SBCL, but is a bit
stagnant. It has a fantastic literature and many mature implementations.

Scheme is great, but a bit too fragmented. Racket seems to be gaining some
momentum. The move to the Chez Scheme backend may be the tipping point that
attracts a critical mass of developers.

Clojure has many interesting modern ideas, but I feel that being so tied to
Java and the JVM has hurt it a bit in the long run.

I miss a bit of innovation in the typed Lisp area; e.g. Qi and Shen never
took off. Carp [1] looks nice.

[1] [https://github.com/carp-lang/Carp](https://github.com/carp-lang/Carp)

~~~
iLemming
I've never used Common Lisp, I know some Clojure, used emacs-lisp, tried
Racket, Fennel. I love Lisps. But I always wondered: What happened to Common
Lisp? Some of the opinions I heard:

- CL is too big (compared to Racket and Clojure)

- Lisp-1 vs Lisp-2

- recursion discouraged (is that even true?)

- There is a thing called the "Lisp Curse"

And then, seemingly, the language's popularity decreased. Setting aside the
fact that I really like Lisps, are there any compelling reasons for an average
Joe programmer like me to start learning Common Lisp in 2019? Honest
question.

~~~
heisig
> What happened to Common Lisp?

Common Lisp is probably in its best shape ever. There is finally a good
package manager ([https://www.quicklisp.org/](https://www.quicklisp.org/)),
plenty of documentation
([https://common-lisp.net/documentation](https://common-lisp.net/documentation)),
and active communities on reddit, #lisp, and elsewhere
([http://planet.lisp.org/](http://planet.lisp.org/)).

There is and has always been a lot of FUD around Lisp, and Common Lisp in
particular. You even mentioned a few examples. My explanation on why this is
the case is that learning Common Lisp really takes some time and dedication,
but our lazy brains try hard to avoid that and find silly excuses instead (Too
many parentheses!).

> are there any compelling reasons for an average Joe programmer like me to
> start learning Common Lisp in 2019?

If you manage to use Common Lisp in a project, you get an enormous boost to
productivity. It is the best language for 'getting stuff done' that I know of.
Even if you cannot use Common Lisp directly, knowing it will stop you from
reinventing a lot of wheels. Greenspun's tenth rule is real.

------
mark_l_watson
I enjoyed the slides, and wish the talk video was available.

Now that I am retired, most of my side projects use SBCL, with a little
Haskell and other languages. I have great respect for the commercial Franz and
LispWorks products but for my projects SBCL works great.

I really enjoyed the history covered in his slides. I have lived through this
history since 1982, but unfortunately almost 90% of my paid work has been in
other languages. In our present time, with great resources like Quicklisp,
the CL Cookbook, etc., I think that the CL ecosystem and the number of
deployed projects should be even greater.

------
auvi
FYI, if you do a git shortlog -sn | head -n4 in the SBCL repo you get (as of
now):

      4025 Douglas Katzman
      2887 Stas Boukarev
      1695 Nikodemus Siivola
      1596 Christophe Rhodes
~~~
reikonomusha
Obviously (FTA) it wasn’t always tracked by Git.

------
ofrzeta
General side note: incremental slides should be "normalized" in a print
version. Not sure if any software supports this. It would probably require
some manual tagging or grouping.

~~~
_emacsomancer_
These look like LaTeX Beamer slides and there is a way of normalising the
print version of incremental slides in Beamer.
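For Beamer specifically, the handout class option collapses overlay and
incremental steps into a single frame each, which is exactly the
normalization wanted for a print version:

```latex
% Collapses \pause and <overlay> specifications to one slide per frame.
\documentclass[handout]{beamer}
```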

------
dan-robertson
The slides are a bit tricky to read without the rest of the accompanying talk.
I couldn’t find a video with the brief search I had.

One thing the talk touches on is that writing a Lisp (cross-)compiler in Lisp
is hard. The slides suggest a few reasons and here are some more.

In CL much of the compilation process requires evaluating CL code. It might
seem that writing the compiler in CL therefore makes you fortunate: you can
just take the code and eval it. But this doesn't work because (to stay
portable and to support optimising compilers) many objects in CL can't be
inspected. For example, the following is valid code:

      (funcall #.(lambda (x) (1+ x)) 5)

In the code above, one constructs a closure at read time and puts it into the
AST; the compiler must take this call to an opaque host-platform closure
(which evaluates to itself) and somehow marshal it into something for the
target platform.

Similarly, one can do the following:

      (defmacro counter (name reset)
        (let* ((i 0)
               (incr (lambda () (incf i)))
               (res (lambda () (prog1 i (setf i 0)))))
          `(progn
             (defun ,reset () (funcall ,res))
             (defun ,name () (funcall ,incr)))))


So to write a CL cross-compiler one must first implement a CL interpreter and
this interpreter must have objects with enough useful state that one can
correctly marshal closures from the interpreter into compiled objects in the
target system, even when those closures close over shared values.

One way to reduce this pain is to restrict the compiler to only compiling a
subset of the language, and then requiring that the compiler is written in
that subset. One may only compile more complicated programs when the host and
target are the same instance of Lisp.

However the “easily compiled portable subset” is probably too small, and you
will end up with either a long bootstrapping process of increasingly useful
compilers or a lot of emulation of your target platform. An example in the
slides is that the result of (byte ...) is implementation-defined, so you
can’t just use the value from the host implementation. Another example is
floats. CL has four float types (short, single, double, and long) which an
implementation may collapse together in certain ways (typical modern
implementations have two 32-bit and two 64-bit types; others might make some
decimal, or pack them into 63 bits), so one cannot rely on the host’s floats
behaving a certain way; instead one has to emulate the target platform’s
float implementation to get reliably portable results. Then again, maybe it
is possible (if a bit painful) to write the compiler without using any
floats.
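As a rough illustration, one can probe how a host image collapses the four
standard float types; the results noted below are what SBCL happens to do at
the time of writing, and another host may answer differently:

```lisp
;; How one host (SBCL) collapses the four ANSI float types. Another
;; implementation may report differently, which is exactly why a
;; cross-compiler cannot trust the host's float behaviour.
(list (type-of 1.0s0)    ; short-float  literal -> SINGLE-FLOAT on SBCL
      (type-of 1.0f0)    ; single-float literal
      (type-of 1.0d0)    ; double-float literal
      (type-of 1.0l0))   ; long-float   literal -> DOUBLE-FLOAT on SBCL
```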

Another bootstrapping difficulty in CL is its object system, which just makes
everything harder, especially if one wants to use objects for many of the
implementation-specific types.

SBCL goes for writing the compiler in a subset of CL, and the result is
indeed sanely bootstrappable. Other implementations typically require eval, a
runtime written in e.g. C, and a slow process of evaluating the improving
compiler on itself to bootstrap. This is easier at first but can lead to
difficulties (some of which are described in the talk).

~~~
kazinator
> _the compiler must take this call to an opaque host-platform closure (which
> evaluates to itself) and somehow marshal it into something for the target
> platform._

If we're cross-compiling, we are almost certainly doing file compilation
whereby we have to externalize the compilation product to be transported to
the target where it is loaded.

If we use ANSI Lisp file compilation as our source of requirements, then we
only have to handle, as source code literals, objects which are
"externalizable". Closures are not.

See CLHS 3.2.4, Literal Objects in Compiled Files.

Of course, you can adopt externalizable closures as a requirement in your own
compiler project, if you like.
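For objects that are externalizable, the standard hook is MAKE-LOAD-FORM
(CLHS 3.2.4.4): a user-defined class can tell the file compiler how to
reconstruct its instances at load time. A minimal sketch, with a POINT class
invented for illustration:

```lisp
;; Sketch: making instances of a user-defined class externalizable,
;; so they may appear as literal objects in code given to COMPILE-FILE.
;; Closures have no analogous hook.
(defclass point ()
  ((x :initarg :x :reader point-x)
   (y :initarg :y :reader point-y)))

(defmethod make-load-form ((p point) &optional environment)
  (declare (ignore environment))
  ;; Return a form that the loader evaluates to recreate P.
  `(make-instance 'point :x ,(point-x p) :y ,(point-y p)))
```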

------
mike_ivanov
arm: threads -- yes, please!

