The Evolution of Lisp (1993) [pdf] (dreamsongs.com)
91 points by steven741 on June 29, 2019 | 39 comments



To refresh your ideas on CL (I know I needed it): https://github.com/CodyReichert/awesome-cl There may be more libraries than you think. Also, some usual complaints have recently been addressed:

- hard to set up an environment? -> http://portacle.github.io/ There's also Lem, and Atom is getting good. https://lispcookbook.github.io/cl-cookbook/editor-support.ht...

- horrible website? -> http://common-lisp.net/ (they fixed it last year). Also http://lisp-lang.org/

- no cross-platform, featureful GUI library? -> https://github.com/lispnik/iup/ (alpha) There's also Qtools for Qt4. And Ltk, but Tk lacks widgets.

- no object-oriented, generic math or equality operators? -> https://github.com/alex-gutev/generic-cl/

- no decent library for numerical computing? -> https://github.com/numcl/numcl

- waiting one month to publish your library to Quicklisp is too much? -> http://ultralisp.org/

- lack of online beginner-friendly documentation? -> https://lispcookbook.github.io/cl-cookbook/


The lack of a really good open-source IDE or IDE plugin is a real roadblock for a number of people. Pointing folks to Emacs or an Emacs derivative is not the solution. I know there are a couple of IDE plugins for Clojure, and I think it would help if there were similar options for Common Lisp.

I know there used to be a plugin for Eclipse (Cusp), but that appears to have died.


On my computers I use a commercial system with its own editor (Mac/Linux) and a free one (Clozure CL on Mac), also with its own editor.


Why is Emacs not a solution? The link provided by the parent is an excellent beginner-friendly environment (http://portacle.github.io/).


I'm not slamming Emacs. I'm just pointing out that there are millions of developers who won't use it, and making it a more-or-less requirement for getting into Common Lisp is not good.


> I'm just pointing out that there are millions of developers that won't use

You are just stating it, without any reason. One can't argue against a bare statement; you have to provide the reasons behind it for there to be an argument.

For example: "Asking people to use an editor where C-c doesn't copy and they have to relearn the keybindings is an unnecessary barrier to entry". I would agree with that, and note that as a stopgap measure they can use the mouse for most tasks, so they can postpone learning Emacs until they have learned CL.

If you are on OSX, CCL has an alternative IDE that's pretty good. Otherwise you can pay for LispWorks or Allegro. Yeah, it would be awesome if there were a multitude of IDEs, but people are not going to write IDEs for free for environments they don't use; it is unreasonable to expect that.

I've never heard people say 'it is unreasonable to ask people to use Android Studio to develop for Android', so I'm not inclined to agree that it is unreasonable to ask people to use Emacs to develop for CL. Especially after Shinmera has gone to great lengths to package a preconfigured version. I know of a recent Lisper who just uses Emacs for CL without previous Emacs exposure. It is an unnecessary barrier, but no more than a hump.


>You are just stating it, without any reason. One can't argue against a bare statement; you have to provide the reasons behind it for there to be an argument.

Not really - he already gave both. The argument is the already stated one "Emacs is not the solution to the problem of people coming to Lisp needing a good programming environment". And the reasons are what he wrote above: "because there are millions of developers that won't use it".

Now you might agree or not with the latter (I for one am in the category that won't bother to use Emacs), but it's a full argument.

>I've never heard people say 'it is unreasonable to ask people to use Android Studio to develop for Android'

People still say that, but that's beside the point.

Android Studio is a typical modern IDE, so not much of a barrier to entry. Emacs is much more arcane and idiosyncratic (and doubly so outside the Unix world, e.g. for the huge majority of Windows- and OSX-based devs).


> it is unreasonable to ask people to use Android Studio to develop for Android

Obviously interesting platforms can get away with much more ridiculous requests such as "buy a Mac" or "program in JavaScript".


I won't use Emacs because I want a smart editor or IDE, not an editor construction toolkit. I have no interest in micro-optimizing my editor.


> I won't use Emacs because I want a smart editor or IDE, not an editor construction toolkit. I have no interest in micro-optimizing my editor.

Pretty much all "smart editors or IDEs" that are available are editor construction toolkits bundled with add-ons that use the toolkit to build complete editors for a variety of special purposes, along with an ecosystem (first-party and/or community) of additional add-ons. Visual Studio, Visual Studio Code, Eclipse, IntelliJ and its various cousins, etc., all fit the model.

Emacs fits in with the rest in this regard (in fact, it's one of the earliest inspirational models for what the field has become, and probably the earliest example of the now-dominant model that is still actively maintained and widely used).


To be honest, when you install Visual Studio or IntelliJ, you have all you need to develop code in the language you chose, with 0 extra customization. Of course, some amount of extra customization will usually follow, but that can come after you are more confident with the environment.

The same is definitely not true for VSCode, Emacs or vim, though perhaps bundles like portacle can offer a similar experience. I say this as an Emacs & IntelliJ user.


VSCode usually requires downloading the officially supported language bindings, which takes 2 minutes to figure out and is very intuitive/easy.


The difference between Emacs and those other tools is that some other people already used the tools to build things for me. With Emacs I have to build it myself, 90% of the time.

It's cool, it's interesting, it's tinkering and I don't want to do it :-)


Until very recently, Emacs was the only editor that felt "smart". Visual Studio didn't really recognize grammar the same way. Emacs's semi-parsing meant that TAB was a way to syntax-check my code on the fly (50% of why I kept using Emacs in college).


Eclipse has had incremental Java compilation since at least 2007. That's 12 years.


Yeah, but Eclipse had its share of WTFs, which made everything nice it brought insufferable.


There are millions of developers who won't use Windows, and millions who won't use MacOS.

I guess neither of those is worth using, either, by that logic.


Because it's 2019 and most developers don't want to use a UI from the 1970s.


Until they do


See the "editor-support" link: there's the Dandelion plugin for Eclipse, albeit basic; as I said, Atom support is getting very good. There's Vim, a basic plugin for VS Code, and Lem, a self-contained editor ready to use for CL and many other languages (though it is "emacsy": same keybindings by default, vim-mode included). There's also a Jupyter kernel and a terminal-friendly ipython-like REPL.


Dandelion doesn't run on Linux.


I tried it on Linux though.


I watched a fascinating presentation, and when I saw the speaker typing (lambda ...) and having it change to (λ ...) and not having to deal with parens... I was jealous.

Does emacs do this?

EDIT: the presentation:

William Byrd on "The Most Beautiful Program Ever Written"

https://www.youtube.com/watch?v=OyfBQmvr2Hc


Yes, M-x prettify-symbols-mode, it's very cool. For parens, there's the built-in electric-pair-mode, paredit, smartparens, and lispy.


Some languages understand λ directly (and in all lisps that support unicode, you can explicitly alias it) - I've occasionally mapped AltGr+L to λ, when playing with those. It's really nice.
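That explicit aliasing is a one-line macro. A minimal sketch in Common Lisp, assuming a Unicode-capable implementation such as SBCL:

  ;; Alias λ to LAMBDA (a sketch; any Unicode-capable CL should accept it).
  (defmacro λ (lambda-list &body body)
    `(lambda ,lambda-list ,@body))

  (mapcar (λ (x) (* x x)) '(1 2 3))  ; => (1 4 9)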

APL perhaps goes a little far, but I'd like to see more standardized symbols in programming languages. Imagine how nice it would be to succinctly use things like the actual outer-product operator ⊗...


> APL perhaps goes a little far

When I learned APL long ago in a languages course, it was on a dedicated APL setup with an APL keyboard.

In that context, it was a "how SHOULD things be" sort of design decision. I know I started off quickly thinking in terms of math and the language, not composing symbols.

Of course, APL keyboards have gone the way of the dodo and so the language has a higher barrier to entry.



- library download manager? -> https://www.quicklisp.org/ (over 1,500 libs)


I've always thought it unfortunate that fexprs were dropped for macros in Lisp. For a language that touts homoiconicity and first-class functions/objects, it feels strange that macros aren't first class.

My understanding is that fexprs have some performance problems compared to macros, but I'm not really sure why, or whether the state of the art has advanced since then.

A really interesting evolution of Lisp is Shen. The license is terrible, and I seriously doubt it will ever achieve widespread use. And while the idea of a kernel Lisp sounds great, in practice I doubt that it actually works.

The performance trade-offs that you're going to need to make become very unclear when you have a language that can run on a Python, CL, JS, or whatever runtime. The JVM is successful because it unifies language and implementation, instead of fragmenting it further like Shen does.

Racket is the clear horse to pick in this race, I think. Hopefully Racket on Chez will improve the performance to close to that of native binaries. If so, I really think that it would be the last piece of the puzzle.


> A really interesting evolution of lisp is Shen. The license is terrible

BSD 3-clause: https://github.com/Shen-Language/shen-sources/blob/master/LI...

Of course that's not the end of the story, and I invite anyone curious both to read more and to not be dissuaded by a Lisper's funny opinions. There are a lot of Lispers with funny opinions (Naggum, Hoyte, Stanislav).

And one more point, not directed at you but at anyone who has strongly negative opinions about Shen's licensing: in this case I'll strongly defend Mark Tarver, who first had a nonstandard license. Then people voiced disapproval. Then he BSD'd it and made a separate closed-source version. And still we have people saying "the license is terrible." You can get Shen 21 right now under the BSD license, _because you asked for it_. Don't punish people for doing what you ask them to do! (Not mruts in particular.)


Fexprs have performance problems compared to macros because fexprs are interpreters whereas macros are compilers. A macro compiles the semantics that the equivalent fexpr would interpret. In 2019, compilation is still generally faster than interpretation.

Fexprs are theoretically interesting in one regard: they can avoid hygiene problems. A fexpr evaluates material in an explicit environment, so it can work out which environment is used for what. It has straightforward access to its own local material, which doesn't interfere with the interpretation of the incoming form, since that is done using the form's own environment. A loop construct with an implicit counter that is invisible to the program, if implemented as a fexpr, can just use one of the fexpr's own local variables as the counter, and thus doesn't have to insert a hidden gensym into a code template. A fexpr is not confused if the interpreted form defines a local function called list, even if the fexpr uses list. Macros combine everything into one clump of code, so achieving the same environmental separation requires jumping through significant hoops (hoops which are omitted entirely in traditional Lisp macro systems).
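As a concrete sketch of that hidden-gensym dance in Common Lisp (repeat here is a hypothetical macro, not a standard operator):

  ;; The macro must insert a fresh GENSYM as the loop counter so the
  ;; generated code can't capture a variable used in the caller's body;
  ;; a fexpr would simply use one of its own locals as the counter.
  (defmacro repeat (n &body body)
    (let ((counter (gensym "COUNTER")))
      `(dotimes (,counter ,n) ,@body)))

  (repeat 3 (print 'hi))  ; prints HI three times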


I'm not sure what the relationship between fexprs and homoiconicity is supposed to be. If anything, the arguments to an fexpr in a Lisp with lexical scoping are opaque and cannot be manipulated, since they are wrapped up in a closure by the interpreter. Plus, not every macro can be written as an fexpr (for example, LOOP). In contrast, fexprs seem to work well in PicoLisp because it uses dynamic scoping exclusively.

Traditional Lisps are only functional in the first-class-function sense, but not in the mathematical sense where functions give the same output for the same inputs. Furthermore, they use an eager reduction rule for function applications, and not lazy reduction. Put together, programmers rely on knowing the implicit control flow so that effects occur in their proper order. All fexprs really do is let the programmer define constructs that manipulate this control flow in non-standard ways. If you think about it: why have fexprs be part of the language if all you need to do is wrap each argument in (lambda () ...)?[1] (If you make { and } be a reader macro for (lambda () (progn ...)), then you would possibly save a little typing.)

(In Haskell, eager evaluation is sort of abstracted into the IO monad, and pervasive lazy evaluation means that, in a sense, everything is an fexpr.)
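A sketch of that thunk-wrapping idea in Common Lisp (my-if is a hypothetical name, not a standard operator):

  ;; MY-IF is an ordinary function; callers delay each branch by
  ;; wrapping it in a zero-argument LAMBDA, recovering fexpr-like
  ;; control over when (or whether) each argument gets evaluated.
  (defun my-if (test then-thunk else-thunk)
    (if test (funcall then-thunk) (funcall else-thunk)))

  (my-if (> 2 1)
         (lambda () (print 'then) 'yes)
         (lambda () (print 'else) 'no))  ; prints THEN, returns YES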

I think of Lisps (Common Lisp in particular) as actually being two languages joined together: the metaprogramming (macro) language and the core Lisp language. It can be a bit confusing distinguishing them because of how metaprogramming stages are interleaved and how the metaprograms are written in Lisp itself. The macro expansion language is a simple tree rewrite language, where for each node, the system looks to see whether there is a corresponding rewrite rule (a macro), then replaces that subtree with that rule's result. This continues until quiescence, at which point the completed program is passed to the core Lisp interpreter/compiler.
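That rewrite step is directly observable; for example (the exact expansion is implementation-dependent, but this one is typical):

  ;; One tree-rewrite step, made visible with MACROEXPAND-1:
  (macroexpand-1 '(when (> x 0) (print x)))
  ;; => (IF (> X 0) (PROGN (PRINT X)))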

One strange Lisp variant is Mathematica, where the entire language is based around these tree rewrite rules --- it's all basically macro expansion. In principle, this means Mathematica does not need macros, but in practice it can be very difficult to control the order of the rewrites.

Lisp macros are first class in a certain way, at least in that you can refer to them with (macro-function ...). I know what you mean though, how you can't just apply #'and to a list of booleans and get the actual boolean "and." I sort of think that Common Lisp should have distinguished between the function and macro slots of a symbol. That way you could define both a macro and function version of a symbol if they both make sense. This would also better distinguish the metaprogramming and evaluation stages.
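For example (and* is a hypothetical name for the separate function version):

  ;; AND names a macro, so there is no function binding to apply:
  (macro-function 'and)    ; => the expander function (first class, in a way)
  ;; (funcall #'and t nil) ; error: AND is a macro, not a function

  ;; A function version has to be defined separately, and it
  ;; necessarily loses short-circuiting:
  (defun and* (&rest args)
    (every #'identity args))

  (mapcar #'and* '(nil t nil t) '(nil nil t t))  ; => (NIL NIL NIL T)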

Something I wish were true about Lisp is that eval(eval(x)) == eval(x) (at least if the expression x has no side effects!). There are a few things in the design that make this difficult or impossible: (1) evaluating a symbol gets its lexical or global binding, so double evaluation of 's will give the value of s, and (2) there is no distinction between a form and a list literal, other than through quotation. A way around (1) is to use MDL-style explicit accessors to get the bound value of a symbol and make symbols self-evaluating (so .x gets the lexical value of the symbol x), and a way around (2) is to have tagged lists like in MDL, where the tag is part of the pointer to the first cons cell. Mathematica does this too, to some extent. For example, {1,2,1+2} in Mathematica is short for List[1,2,Plus[1,2]], which evaluates to List[1,2,3]. "List" is not an element of the list, but rather the tag (called the "head"). (There's a hole in my proposal, however: if the lexical value of a symbol is a form, then double evaluation will evaluate to what that form represents. I don't know a way around this. This might simply be a formal logical contradiction from trying to collapse different levels of representation into one, a Russell-like paradox.)
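To make point (1) concrete:

  ;; Evaluating a symbol fetches its binding, so EVAL is not
  ;; idempotent when a variable is bound to a form:
  (defvar *s* '(+ 1 2))
  (eval '*s*)          ; => (+ 1 2)  ; first EVAL looks up the binding
  (eval (eval '*s*))   ; => 3        ; second EVAL evaluates that form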

I'll just throw something else I've been thinking about here: the old Lisp-1 versus Lisp-2 debate is about whether a symbol should refer to a single thing, or whether using it as a function vs. using it as a value should refer to different things. The utility of two namespaces seems to be that when Lisps were dynamically scoped, setting the value of a symbol wouldn't cause a clash if that symbol happened to be used as a function elsewhere. However, with the advent of lexical scoping in Lisps in the 80s this became a bit less of an issue, though it remained in the design to avoid collisions in macro expansion. I wonder if the "correct" Lisp-2 ought to be global vs. local bindings for a symbol, at the cost of needing to annotate symbols like .x and ,x for local and global bindings, like in MDL. As a convenience, (f .x) would be short for (,f .x).

[1] This is the paper in which Kent Pitman argued against fexprs in Lisp: https://www.nhplace.com/kent/Papers/Special-Forms.html


> One strange Lisp variant is Mathematica

I wouldn't call Mathematica a Lisp, given that it is a runtime rewrite system and not based on a defined evaluator (it's the defined evaluator that also makes Lisp relatively easy to compile, and compilable to relatively efficient code).

> That way you could define both a macro and function version of a symbol if they both make sense.

There is a variant of that in CL called 'compiler macro'.

> The utility of two namespaces seems to be that when Lisps were dynamically scoped

One still has the same problems with global symbols in Scheme.

   (define (foo list) (list list))
Here the parameter variable LIST shadows the global function LIST.


> Mathematica

It's derived from Wolfram's conception of Macsyma, and it is designed so that all code is made out of numbers, strings, symbols, and arrays (sure, not cons cells). I guess it's nice to know you wouldn't call Mathematica a Lisp, but I didn't say it was strictly in the Lisp family, just a strange variant. (Does this mean you do not consider PicoLisp a Lisp either, since it is not really compilable, as far as I can see? Or is it fine to you because it has a defined evaluator?)

Mathematica does have a built-in compiler, but granted it is unclear exactly what subset of the language it is compiling.

> There is a variant of that in CL called 'compiler macro'.

I am aware, but that certainly does not have the same effect because CL is not required to invoke your compiler macro. This means you cannot rely on compiler macros for control flow manipulations, like what 'and' requires. (I know you said "variant," but they are more of an optimization than what I was talking about.)

> (define (foo list) (list list))

As I said, with lexical scoping "this became a bit less of an issue." What I had in mind is that at least any of these kinds of shadowing errors will be entirely local. You won't have the problem that by using 'list' as the name of an argument, if it were dynamically bound in a Lisp-1 it would mess up any function you might call that itself uses 'list'. Lexical scoping at least prevents this spooky action at a distance that you would need a Lisp-2 to prevent under dynamic scoping.


> I guess it's nice to know you wouldn't call Mathematica a Lisp,

Thanks!

> but I didn't say it was strictly in the Lisp family, just a strange variant.

Right, that was my point, I don't consider term rewriting systems as Lisp variants, even though they may have numbers and strings as data types. The actual execution engine is too different.

Fexprs had the interesting property that they see the actual source at runtime (which makes them very different from lazy evaluation or from wrapping things in lambdas), which appeals especially to users/implementors of computer algebra systems (see Reduce, written in Portable Standard Lisp, which provides fexprs) or other math software dealing with formulas (like R, which has a feature similar to fexprs).
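ANSI CL has no fexprs, but a macro that quotes its own argument can approximate the effect for illustration (show-form is a hypothetical name; a real fexpr would receive the source form at runtime rather than at expansion time):

  ;; Approximating "sees its own source" with a macro:
  (defmacro show-form (form)
    `(format t "~S => ~S~%" ',form ,form))

  (show-form (+ 1 2))  ; prints: (+ 1 2) => 3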

> Mathematica does have a built-in compiler, but granted it is unclear exactly what subset of the language it is compiling.

From what I read of its very unspecific documentation, I doubt that it does the term rewriting at compile time (like Lisp does macro expansion at compile time) - does it? To me it looks like the compiler targets numeric code and basic control flow...

They probably have better documentation of their language implementation, internally. Or is there an externally available document, which actually describes their implementation?

Mathematica as a rewrite language

https://www3.risc.jku.at/publications/download/risc_342/1996...


I fixed that in TXR Lisp. Macro bindings and function bindings in the global namespace are separate and coexist. The mboundp predicate is provided for testing for a macro binding, which can be undone with mmakunbound, analogous to fmakunbound.

https://www.nongnu.org/txr/txr-manpage.html#N-0384A294

TXR comes with function complements of if, and, and some others.

  1> (and (prinl 'foo) nil (prinl 'bar))
  foo
  nil
  2> (mapcar 'and '(nil t nil t) '(nil nil t t))
  (nil nil nil t)
Unsurprisingly, TXR Lisp has no compiler macros. If you want to speed up a function using a macro, then just write a same-named macro.

TXR Lisp macros can decline to expand by returning an output that is eq to their input. If a macro declines to expand, and a function exists, then that form will be treated as a call to that function, naturally.

When a TXR Lisp lexical macro declines to expand, then a same-named macro in an outer scope (or else the global environment) is thereby effectively unshadowed and gets an opportunity to expand the input. This is very useful and exploited in the implementation of tagbody, which it simplifies. http://www.kylheku.com/cgit/txr/commit/share/txr/stdlib/tagb...


Nice discussion starting around page 23 of how the end (and the limitations) of the PDP-10 led to an explosion of innovation, including the development of continuations (still so unfamiliar back in 1993 that they had to be explained in simple language).


---- 7 Conclusions [The following text is very sketchy and needs to be filled out.] We’re still working on this. [End of sketchy section.] ----


I have this printed in a three-ring binder, lying on the floor under the desk here.





