Hacker News new | past | comments | ask | show | jobs | submit login
Common Lisp vs Racket (gist.github.com)
250 points by ducktective on Sept 5, 2022 | hide | past | favorite | 190 comments

I'm a heavy user of Common Lisp, and I only dabble in Racket from time to time. Common Lisp is my tool of choice for a lot of the reasons stated in the post, but since the post largely skews favorably toward Common Lisp, I'll offer two things that Racket shines at:

1. Documentation

Racket has a beautiful documentation system called Scribble [0]. Almost all documentation written for Common Lisp is ugly, inconsistent, and doesn't follow any sort of standard. Common Lisp has a gazillion "documentation generators" which do little more than slurp up all the documentation strings and barf them back out as e.g. HTML. Common Lisp users tend to just rely on READMEs + jump-to-definition to learn how a project works. A few Lisp programmers (e.g., Shinmera, Edi Weitz) have made their own documentation systems that look better than average, but are still at least an order of magnitude less capable than Racket's offering.

Racket's documentation system is aesthetically pleasing, very well organized, and makes the programmer want to write long-form documentation. Moreover, Racket (the language) sets a superlative example of what its users can strive for in terms of quality of clear, no-nonsense writing.

2. Languages and DSLs

One of Lisp's (any dialect) defining features is the ability to build domain-specific languages, syntactic extensions, and the like by way of a special kind of function that can run prior to the code's execution. These special functions are called "macros", and across Lisp dialects, there are lots of different takes on them.

Common Lisp gives you a grab bag of facilities for writing code that lets you change syntax at different levels: DEFMACRO (scoped variant: MACROLET), DEFINE-COMPILER-MACRO, DEFINE-SYMBOL-MACRO (scoped variant: SYMBOL-MACROLET), SET-MACRO-CHARACTER, and a few others. At best, I'd consider these only loosely related, and they all work on really "raw" data representations (parsed S-expressions and character streams). In part because of this, ANSI Common Lisp doesn't provide access to any notion of environment, source location, etc. when writing new macros.
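To make the levels concrete, here is a rough sketch of three of them (the `!` reader macro is purely illustrative):

    ;; Form level: DEFMACRO rewrites whole expressions before compilation.
    (defmacro swap (a b)
      `(rotatef ,a ,b))

    ;; Symbol level: a bare symbol expands into a form wherever it appears.
    (define-symbol-macro now (get-universal-time))

    ;; Character level: hook the reader itself, so !x reads as (not x).
    ;; (Mutating the global readtable is fine for a demo, rude in a library.)
    (set-macro-character
     #\! (lambda (stream char)
           (declare (ignore char))
           `(not ,(read stream t nil t))))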

Macros defined by the above operators "live" in different places and are compartmentalized in different ways. There are some Lisp libraries that help you tame definitions and keep them fenced in, like NAMED-READTABLES which helps you organize where reader macros (macros which change how Lisp is parsed at the character-level) live and how to turn them off and on in a sane way.
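For instance, with NAMED-READTABLES you name a readtable once and opt into it per file (a sketch based on its documented API; the `!` macro character is illustrative):

    (named-readtables:defreadtable :bang-syntax
      (:merge :standard)                 ; start from the standard readtable
      (:macro-char #\! (lambda (stream char)  ; !x reads as (not x)
                         (declare (ignore char))
                         `(not ,(read stream t nil t)))))

    ;; At the top of any file that wants the syntax:
    (named-readtables:in-readtable :bang-syntax)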

Contrast with Racket, which has explicit documentation on how to write macros [1,2], how to debug them [3], and how to encapsulate them as a new DSL embedded in the system [4]. Racket—as a technical system—clearly defines the phases of compilation, the objects manipulated during those phases (e.g., "syntax objects"), how to provide debugging information, and so on.

In the end, Common Lisp's system for making DSLs feels a little runny. It's supremely hacker-friendly, but it's a bit more difficult to deliver, in one fell swoop, a language as a library. Racket, on the other hand—at the expense of increased rigidity and of needing to read two or three complete chapters of a manual—gives you a way to neatly bundle up a language. This is why Racket has nice DSLs like Typed Racket, Hackett, Scheme RnRS, etc.

I use Common Lisp and I develop relatively large DSLs in the language, and even if I want to, it's hard—if not impossible—to write large DSLs that have nice error messages, source locations, etc. without writing an entirely new "reader" (Lisp's term for a code parser) myself every time. For smaller DSLs where user code doesn't exceed a few lines? Examples like:

- regex syntax

- string interpolation syntax

- a pattern matching feature

Common Lisp is great, and you get a ton of mileage out of the facilities it provides. But larger ones? Examples like:

- an embedded language for writing documentation with markup

- a complete language with lazy evaluation semantics

- a language for non-deterministic programming (like SCREAMER, Prolog)

- a language for strictly typed functional programming (like Coalton, Hackett)

Tackling these in Common Lisp requires ambition and a very steady hand, whereas Racket's overall system (language, implementation, documentation) makes that a comparatively painless exercise.

[0] https://docs.racket-lang.org/scribble/

[1] https://docs.racket-lang.org/guide/macros.html

[2] https://www.greghendershott.com/fear-of-macros/

[3] https://docs.racket-lang.org/macro-debugger/index.html

[4] https://docs.racket-lang.org/guide/languages.html

Unfortunately, I think you might perhaps have come to a different conclusion if you'd tried Racket's so-called "hygienic" macros in anger. I've found them difficult to properly understand, not very usable in practice, and overly complex for non-trivial cases.

This is not what you'll usually read in forums, but I think DEFMACRO is the better design here.

Racket's macro system is quite baroque and supports a lot of things. You don't need to use hygienic "syntax-rules" at all. Consider writing your own transformer [1], which can be an arbitrary lambda function without restriction.

Or, import defmacro [2] and use that.

As can be seen, Racket actually provides more power and choice in this regard, including support for Common Lisp's way.

In any case, I make no claim to either being better or worse. I just think Racket is more principled in its mechanisms for defining and delivering a DSL to a user.

[1] https://docs.racket-lang.org/guide/proc-macros.html

[2] https://docs.racket-lang.org/compatibility/defmacro.html

Agree with "Racket is more principled..." Where I differ with you is that I'd say Racket theoretically provides more power, but the system is not tractable in practice. I suspect all large Racket programs were written by people with PhDs from Northwestern.

BTW, Racket's defmacro is only partially compatible with Common Lisp defmacro. It doesn't really work. I even seem to remember (? from long ago) that it's impossible to implement a genuine CL defmacro in Racket.

> all large Racket programs were written by people with PhD's from Northwestern

not totally wrong but i did write this https://github.com/disconcision/fructure with ~2 years programming experience. implementation features some nice simple define-syntax-rule macros which i maybe never would've managed with defmacro

You are hanging out in the wrong forums :-)

The `defmacro` model is "simpler", true - but it is fragile and breaks down when you want to do complicated things.
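The standard example of the fragility: with `defmacro`, avoiding variable capture is entirely on you, via GENSYM.

    ;; Buggy: the macro's TMP can capture a user's variable.
    (defmacro or2-buggy (a b)
      `(let ((tmp ,a))
         (if tmp tmp ,b)))

    ;; (let ((tmp 5)) (or2-buggy nil tmp)) ; => NIL, not 5!

    ;; Fixed: manufacture a fresh symbol for every expansion.
    (defmacro or2 (a b)
      (let ((tmp (gensym)))
        `(let ((,tmp ,a))
           (if ,tmp ,tmp ,b))))

Hygienic systems do this renaming automatically; the disagreement is over what that machinery costs in complexity.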

Check this blog post, written by someone who knows his Common Lisp.

EVAL-WHEN considered harmful to your mental health https://fare.livejournal.com/146698.html

More to the point of how simple (and easy) defmacro is compared to syntax-case and syntax-rules, I like another fare post: https://fare.livejournal.com/189741.html If you're as smart as fare, it "isn't too hard to translate it".

Though I concede this case could be analogous to trying to write a doubly linked list in Rust: it's not enough to say definitively yea or nay. So I'd like to see a concrete case you have in mind where the tradeoffs are squarely against CL.

Like, the loop macro is more complicated than any I have written myself, but you can break it down, and it's not that bad -- I think Norvig's version is pretty neat to study: https://norvig.com/paip/loop.lisp

This project (and it's not the only one!) adding C syntax to CL, https://github.com/y2q-actionman/with-c-syntax, I think is more complicated than loop, and is about where I'd put the level of "complicated things" at which I'd like to see an example from the Scheme world that clearly shows defmacro's deficiencies on some metric. (Fewer bugs? Easier to add new features to? Shorter code? Faster performance at compile time, runtime, or both? Easier to understand or faster to implement for people with similar levels of skill in the language?)

The final `syntax-case` version:

    (define-syntax (nest stx)
      (syntax-case stx ()
        ((nest outer ... inner)
         (foldr (lambda (o i)
                  (with-syntax (((outer ...) o)
                                 (inner i))
                    #'(outer ... inner)))
                #'inner (syntax->list #'(outer ...))))))
looks fine to me. Using #` and #, and defining `reduce` one could make a version that looks like the Common Lisp implementation.

Note that the `nest` form just rearranges its input - so there is no problem getting the scope of identifiers correct. That is: we are well within the comfort zone of `defmacro`.
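Concretely, `nest` just wraps each form around the next:

    (nest (list 'foo) (list 'bar) (list 'baz))
    ;; expands to (list 'foo (list 'bar (list 'baz)))
    ;; => '(foo (bar (baz)))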

With regards to the `syntax-rules` solution: Fare refers to "when I learned Scheme", and pre-R6RS there was no `syntax-case` in the specification. In practice all implementations had macro systems more expressive than `syntax-rules`, but it was fun to see whether something was possible with plain `syntax-rules`.

Right, it's fine, and is a pretty basic macro. Doubly linked lists are pretty basic data structures too, even the Rust versions once you figure them out. I like your sibling comment making it look like the CL version.

I still want to know in more detail why you think that doing things this way instead of the CL way is less likely to be "fragile and break down" for the complicated stuff; it would help to have a specific complicated example to showcase. Perhaps https://github.com/disconcision/fructure, linked in another comment, would be a good study? The author there claimed they might not have been able to manage with defmacro; maybe someone familiar with both could articulate the challenges in detail.

Is it just that some things benefit a lot from pattern matching, and if so, does using CL's Trivia library mitigate that at all (in the same way that gensym + packages + Lisp-2-ness can mitigate hygiene issues)? Or is it more that at some point of complexity you need to do code walking, and this is more straightforward in something like Racket?

It's not about pattern matching: you can use pattern matching to destructure the input of a syntax transformer (macro) in Common Lisp too. Using pattern matching makes the macros easier to read though [but that is of course a matter of opinion].

The "special sauce" is being explicit about compile time vs runtime (aka using phases).

The paper "Composable and Compilable Macros: You Want it When?" by Matthew Flatt describes why conflating runtime and compile time is a problem.
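In Racket terms, compile-time helpers live at phase 1 and must be introduced explicitly — a minimal sketch:

    #lang racket
    (require (for-syntax racket/base))

    ;; This helper exists only at compile time (phase 1).
    (define-for-syntax (count-args stx)
      (length (cdr (syntax->list stx))))

    ;; A macro may call it; runtime code cannot see it at all.
    (define-syntax (arity-of-call stx)
      #`#,(count-args stx))

    (arity-of-call a b c) ; => 3

Common Lisp's rough analogue is wrapping helper definitions in EVAL-WHEN, which is exactly what the linked fare post is about.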


Here is a version that follows the Common Lisp solution. Since `reduce` is in the standard library of Common Lisp but not Racket, I implemented just the case I needed.

    #lang racket
    ;; The Common Lisp original, for reference:
    #;(defmacro nest (&rest r)
        (reduce (lambda (o i) `(,@o ,i)) r :from-end t))
    (require (for-syntax racket/match))
    (begin-for-syntax
      ;; Right fold over a non-empty list, like CL's (reduce f xs :from-end t).
      ;; Defined at phase 1, since the NEST transformer runs at compile time.
      (define (reduce f xs)
        (match xs
          [(list x0 x1) (f x0 x1)]
          [(cons a d)   (f a (reduce f d))])))
    (define-syntax (nest r)
      (reduce (λ (o i) #`(#,@o #,i))
              (cdr (syntax->list r))))
    (nest
     (list 'foo)
     (list 'bar)
     (list 'baz))

To each his own, obviously, but I've programmed a fair amount in CL and Scheme and vastly prefer syntax-case to defmacro. There is a little more cognitive overhead, but it all makes sense. Syntax-case macros are easier to write and read, from my point of view.

And Racket has syntax-parse, which is even nicer, I seem to recall.

> I use Common Lisp and I develop relatively large DSLs in the language, and even if I want to, it's hard—if not impossible—to write large DSLs that have nice error messages, source locations, etc. without writing an entirely new "reader" (lisp's term for a code parser) myself every time.

We have to go deeper! Could you write a DSL in CL to help you write new DSLs as easily as in Racket?

I have a project to do that, https://github.com/ruricolist/vernacular, although I'm not working on it right now.

This is what Racket people have done to write Typed Racket in it. I think it is called Turnstile: https://docs.racket-lang.org/turnstile/.

I don't think typed racket uses turnstile? At least, typed racket is far older than turnstile is.

Huh, I was under the impression that it was based on turnstile and was sure that I saw that somewhere, but now I cannot find any indication of that being the case. So maybe I am mistaken.

Nevermind then, turnstile is still something fitting for this topic : )

You are correct; typed racket does not use turnstile.

I’ve never used Racket, but reading your description of the documentation (and Scribble) makes me want to check out that aspect of it. I wonder how it compares with rustdoc? I do mostly Rust these days and its generated documentation is usually really good.

The generated Rust docs... are not very good. And on more complex types you have a ton of traits, with methods and all their lifetimes and generic parameters which are really hard to follow if you're unfamiliar with a library or Rust in general. Also, even the biggest libraries (clap for example) reserve a couple paragraphs for the front page prose and leave the type docs to speak for themselves... which is just not good enough.

I would go as far as to say that a non-negligible part of why people are intimidated by Rust is the rustdoc format and layout. Thankfully it makes up for it with an incredible Rust Book.

To me the gold standard of docs is Elixir, followed by Racket. Even Go's docs are better than Rust's, but that's mostly because of a much simpler type system, so simpler autogenerated docs for structs and their methods.

I really got annoyed by the Go package docs after I programmed in Go regularly for about a year. They were just so artless and flat. Each one is just a big list of all the stuff in the package, with no indication that the code has any structure to it at all. It was a few years ago though; perhaps it has gotten better?

Scribble beats rustdoc hands–down when it comes to authoring documentation, but even with Rustdoc you can do a good job of it if you try (http://db48x.net/reposurgeon/rust-port-docs/reposurgeon/path...). I’ll have to check out Elixir next.

Go docs are pretty terse and lifeless, agreed.

Here's an example of Elixir docs: https://hexdocs.pm/ecto/Ecto.html

Note the layout, the amount of prose to introduce a module and the function documentation that is not limited to just a quick line but often has multiple examples and tries quite hard to be clear and extensive.

There is no reason Rust docs couldn't be that good, but the example has to come from the top: improve the std docs, and everything will follow suit.

ecto and phoenix are shining examples of well written docs but it's hardly the standard for elixir as a whole. many of the docs for most libraries are just lists of functions. some might have a short description though you're usually relying on a dialyzer definition at best.

Fair enough, but neither Ecto nor Phoenix is part of the stdlib; being written by some of the core language maintainers with such a focus on documentation, though, they provide a great example for everybody to follow. Having excellent docs on the core libraries, the main SQL datamapper, and the web framework is the reason why Elixir is so loved even though it's quite a niche language.

As a new Rust fan I would just love to have the std docs half as good as Elixir's. I constantly get lost in rustdoc.

> Each one is just a big list of all the stuff in the package, with no indication that the code has any structure to it at all.

I'm afraid much Go code doesn't have that much structure. At least none that you could express in a machine-readable form.

Oh, I see. Scribble looks a little like TeX, but actually it’s an alternate read syntax/dsl. The first example where they show off the real utility of it is:

    @tabular[#:sep @hspace[1]
             (list (list @bold{Animal} @bold{Food})
                   (list "mouse"       "cookie")
                   (list "moose"       "muffin"))]
which really drops it in your lap. “tabular” isn’t from some fixed list of keywords, it is an ordinary racket function that you can call, with ordinary arguments that can be strings, lists, etc. Presumably there will be an example in the next page or two with @define[…] in it to close the loop.

And the table of contents promises literate programming as well. Someone was really thinking when they put this together; I really like it.

The documentation generated for individual functions (such as <https://docs.racket-lang.org/scribble/base.html#(def._((lib....>) is fairly nice as well. Readable, linked together by hyperlinks on the type predicates as well as hyperlinks in the body of the documentation itself; it’s pretty good. Rustdoc puts links to the source code for each one though, and I would soon miss that.

Good point; it looks like Scribe even more than TeX.

The only other equal I've seen to scribble around is Python's Sphinx. I use Sphinx for all my projects no matter what language they're in because it's so good.

I've loved using Racket docs while working through HTDP. Very clear and pleasant to search/look through. Great definitions, with formal specification as well as practical examples, and all the types etc. are nicely hyperlinked together.

As someone trying to understand what people find special about Lisp, this was helpful. I get it now about macros, but doesn't modifying the language in the compiler phase just complicate solving the actual runtime problem? I feel I would get distracted into writing my own DSL. I've never wanted to modify the language as I was writing in it. Now I'm curious.

Don't think of it as modifying the language. Think of it as adding new ways of expressing ideas in the existing language.

These sorts of languages/language extensions pop up all the time in "ordinary" programming. For example, some common DSLs we see often:

- regex is a language for string patterns

- JSON is a language for simple nested data

- SQL is a language for expressing DB queries

- OpenMP contains (essentially) a language for parallelizing loops

- infix notation for math (this might sound crazy because it's so common and basically built in as standard in almost all programming languages)

The problem is that many of these are absolute kludges in existing languages. Anybody who has had the displeasure of trying to interpolate DB table names into SQL SELECT strings knows this. Most languages try to make it a bit better by adding preprocessors, using builder-patterns, abusing reflection, or otherwise. Looking under the hood of some languages' implementations of these ideas will often lead you to see the most cursed of programmer things in existence.

Each of those languages (or close syntactic cousins of them) can be written directly in Lisp, as a library, without shoving them into strings, or using external tools, or making custom file formats. They all can exist as a part of the Lisp language as syntax extensions.
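As a sketch of the idea (modeled loosely on the CL library SxQL; details illustrative, not its exact API), a SQL query as a syntax extension might look like:

    (select (:id :name)
      (from :users)
      (where (:> :age 21)))

The table and column names are symbols inside ordinary Lisp data, so there are no strings to interpolate and no quoting rules to get wrong; the library compiles the form down to SQL with parameters.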

Making DSLs requires exercising that muscle a bit; it's non-trivial to design a new and useful language that fits orthogonally into an existing system.

It's like being introduced to OO programming where, at least for me, it made sense but it wasn't immediately obvious what a good object model for a problem might be. It took me lots of trial and error to understand how OO works in "the real world", where it's good, and where it falls short.

Once you get the hang of making DSLs—or even just small but powerful syntax extensions—it's an amazing tool to have at your disposal.

My second thought, which I excluded for brevity, was: aren't functions and libraries language extensions? I mean, technically you could write C without malloc, but it wouldn't be very useful. Regex is a great example of an embedded DSL, but I've never thought "I wish I could write a macro to change its behavior on the fly in this special case". The last time I looked at this issue I saw that sure, if you don't have generics, it would be helpful to have a language that could write typed functions for you. That makes sense. Changing the parser and compiler behavior before any runtime code executes could be very interesting. There are certainly enough people waxing poetic about the expressiveness of it to make it worth learning.

Again, you don't want to change regex, you want to create it in the first place.

Maybe a simpler example, what if I wanted this in (pseudo-)C?

    with (s = fopen("/foo/bar", ...)) {
        /* ... use s ... */
    }
When the "with" clause ends, the file will automatically be closed. This is more or less bog-standard RAII expressed as a new language construct.

In Lisp, this is a feature you could build and provide as a library feature. In fact, we can do it, straight here in an HN comment.

    (defgeneric destroy (x)
      (:documentation "A destructor for use with the WITH feature."))

    (defmacro with ((var = val) &body body)
      "Python-style RAII macro. VAR will be
       bound to the value of VAL, and upon
       exiting scope, will be destroyed with
       DESTROY."
      (assert (eq = ':=))
      (let ((orig (gensym)))
        `(let* ((,orig ,val)
                (,var ,orig))
           (unwind-protect
               (progn ,@body)
             (destroy ,orig)))))
We can register a new destructor easily for open file streams:

    (defmethod destroy ((s stream))
      (close s))
(We could add some more error handling, etc. to make it more robust.)

Now, we effectively have RAII in Lisp:

    (with (s := (open "/foo/bar" ...))
      (write-line (read-line s)))
This is now real Lisp code you can actually run, completely integrated into the language, and nicely orthogonal to the existing gamut of features.

This is a concept we can use almost immediately after learning about it, instead of waiting for Python's PEP 6363 to pass with consensus from people who may have different programming goals than you do.

I can see a lot of uses for macros, but in this case C# has a "using" keyword, and the designers have put a lot of thought into it and how it behaves in a wide variety of situations. Lisp was a language developed in a world where people worked on their own cars, like me. I would have loved Lisp if I had found it earlier, like about the time I rebuilt my VW engine in the 80's.

Many of the language simplifications in more recent C# versions, code generators, Roslyn analyzers, expression trees are all features that in Lisp boil down to one thing, macros.

https://github.com/norvig/paip-lisp - Peter Norvig's Paradigms of AI Programming

https://github.com/norvig/paip-lisp/search?l=Markdown&q=defm... - all references to defmacro in the markdown files

Chapter 3 shows a simple macro, just adding a while loop to the language.
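That Chapter 3 macro is, approximately (paraphrased from memory, not Norvig's exact code):

    ;; A WHILE loop as a macro: loop until TEST is false.
    (defmacro while (test &body body)
      `(loop (unless ,test (return nil))
             ,@body))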

Chapter 9 shows some more complex ones, including a with- macro and a grammar compiler macro.

Chapters 11 and 12 show the development of a Prolog implementation in CL, again using defmacro to aid in compilation.

Chapter 12 shows adding an OO system to the language. Technically not needed with CLOS, but a good demonstration of what can be done with macros.

There are other examples (why I included that search link). Macros let you change the language in ways large and small. Many uses could probably be replaced with functions, though you'd end up having to throw a bunch of quotes or closures about in order to delay processing things. You'd also be delaying that to runtime, which incurs its own penalties compared to macro expansion at compile time (if using a compiled CL).

I like this, but it seems like there would be many inconsistencies in building your own language without an actual language specification.

So, as a designer of miniature languages and syntactic extensions, specify it!

A well written function must also abide by some promise or specification—contracts on the inputs and invariants on the outputs. I don't see why new syntactic structures are much different from that.

I think there's undue spookiness around macros among lots of programmers, especially those who haven't sought to solve some problem with them.

- "It's too powerful for the common person."

- "It's way too likely to make code confusing."

- "It's invariably too hard to reason about."

- "It won't work on a team because everybody is going to make their own weird incompatible language and it's just not scalable."

All of these sentiments are complete bunk, and I argue are against common wisdom around other programming abstractions, like functions or classes.

Extending a language with macros isn't all that different from extending a language with your own functions. At least in principle.

And a lot of what tends to be macros in Lisp turns into functions in, e.g., Haskell. For instance, you can define your own new branching and looping constructs (like 'if' or 'while') etc. as functions in Haskell, but in Lisp these need to be macros.

Unlike call-by-need evaluation, macros provide the opportunity to compute things at compile time (think lookup tables) or analyze the arguments at compile time (think DB query syntax checking).
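A toy example of the compile-time computation point: the LOOP below runs when the macro expands, so the compiled code contains only the finished list.

    (defmacro squares-upto (n)
      ;; N must be a literal here; the work happens at expansion time.
      `',(loop :for i :upto n :collect (* i i)))

    ;; (squares-upto 4) expands to '(0 1 4 9 16),
    ;; leaving a literal constant in the compiled code.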

In Lisp, you can also define branching and looping as functions, but you need to concede and have the arguments themselves be (lambda) functions.

    (defun while (c body)
      (when (funcall c)
        (funcall body)
        (while c body)))
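Calling it then means wrapping both the condition and the body in lambdas at every use site, which is exactly the boilerplate a macro version removes:

    (let ((i 0))
      (while (lambda () (< i 3))
             (lambda ()
               (print i)
               (incf i))))
    ;; prints 0, 1, 2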

Yes, I know that laziness doesn't take care of all macros. That's why even Haskell has macros.

Yes, if you wrap your arguments in lambdas, you can write loops like that.

Though to be as performant as the built-in constructs, you need a rather clever compiler that 'removes' the lambdas again.

I don't know how this exactly fits, but you can do crazy stuff like query a SQL db in the compiler to check your Racket program for auto generated variable names. This extends the editor tooling to highlight names which don't match to columns in the DB tables.

I wrote a post about this here[1]

[1]: http://tech.perpetua.io/2022/01/generating-sqlite-bindings-w...

What about package management? I feel Racket’s is more standardized. I get pretty confused trying to install some requested Common Lisp packages. Browsing to the package websites, there’s no commonality, and there’s even usually no information on how to install them.


Quicklisp has become the main way many (most?) people handle installing a library for Common Lisp. Under the hood, ASDF is used for defining the system and specifying how it should be compiled and loaded, and it has also been standard in the community for a very long time.

Yeah, this is correct. ASDF and Quicklisp are essentially pervasive in the Common Lisp community now.
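For reference, a minimal ASDF system definition looks something like this (system and file names hypothetical):

    ;; my-app.asd -- a minimal ASDF system definition
    (asdf:defsystem "my-app"
      :description "Example system."
      :depends-on ("drakma")            ; libraries Quicklisp can fetch
      :components ((:file "package")
                   (:file "main")))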

If you see a library you want to use, like the HTTP client drakma, then you just start your Lisp and write

    (ql:quickload "drakma")
and it will download it (if needed), install it (if needed), compile it (if needed), and load it. It's really easy, and completely changed the game for Lisp about 15 years ago.

Wow, that sounds like a great feature. Wonder if any other language has similar.

Some of these objections are the author taking well-known properties of CL as must-haves, and evaluating other languages against them.

Considering them must-haves is perfectly reasonable for a programmer who wants to work in that style. For example, being able (putting it roughly) to modify any part of the system at any time, and let the semantics decide what happens.

Scheme/Racket absolutely is not CL, and Racket has been moving further away from CL than Scheme already was. I've worked on critical production systems in both Scheme/Racket and CL professionally, and they are very different languages; people wouldn't confuse them if they didn't both use S-exp syntax. (Individual Lisp family developers are also different. There are some shared memes, but you don't end up a Lisp family language expert today, shunning that $500K Python job, without being stubbornly self-motivated and/or self-destructive.)

One of the keys to learning Scheme originally was to put aside my biases about what was important, and to try to use and then understand the idiomatic way, so that I could then decide what made sense.

(Examples: My own first bias when learning Scheme was trying to micro-optimize for runtime as I coded, like I would in C, but I didn't know what cost model I should reason about. If you look at the first Scheme code I wrote, https://pkgs.racket-lang.org/package/html-parsing , it's a little like a C programmer with some non-Scheme Lisp experience. One of the next barriers was learning to avoid mutations and premature exits, and to leverage tail recursion. Before I learned that, I couldn't have told you that this can make code easier to read.)

When we were first learning computer things, there was an implicit suspension of disbelief or acknowledgement of not knowing. As we grow more knowledgeable, we have to be careful not to plateau or become rigid, assuming we know the right way, and dismissing things that are nonintuitive. Sometimes we will be right to dismiss them (and often will be, in corners of "tech"), but sometimes we will miss out. It's even not necessarily a net loss if we invest in understanding a perspective, and then determine its flaws, because we may still have added some new useful ideas to our mental toolkit.

This analysis is spot on. The author is annoyed about the absence of many things that, in most programming languages, would be seen as outright absurd. People have come to realize multi-methods, monkey-patching, and mutable interfaces break down when writing code at scale, across a large team. It's no surprise that Racket has internalized these lessons, and someone in love with Common Lisp would see these as shortcomings. A Haskell programmer could similarly rant about Python's lack of closures, a C++ programmer about JavaScript's lack of templates, and so on. There are many reasons to hate many languages, but "that language isn't this language" is a rather petty one.

Minor nitpick: Python actually has closures, as far as I can tell.

But, no worries: Haskell programmers still have plenty to rant about Python. And vice versa.

Er, I rather meant that Python doesn't have a first-class lambda definitional form. You are right that you can get the Python runtime to make closures.

You are right that the syntax for making closures is a bit wonky.

You either have to give them a name (with 'def'), or you only get a restricted subset of syntax (with 'lambda').

In practice, giving them a name is not too much of a hassle.

Where are you finding $500k python jobs?

Finance is one of those areas.

Depends on what you are working on. I happen to like Racket a lot but no one has ever paid me money for Racket development. In the last 40 years, lots of paid Common Lisp work.

Racket’s portable GUI and app builder are very good, as is LispWorks’.

> Racket’s portable GUI and app builder are very good,

Ehh.. they're pretty good and I'm thankful for them. I don't think I'd go as far as 'very good' though; the selection of widgets provided is basic. My rule of thumb is that if your application only needs the sort of functionality you can find somewhere in DrRacket, then racket/gui will be pretty good. But if you need a widget you can't find in DrRacket, then you'll probably have to implement that widget yourself.

How do you find CL gigs? What kind of companies/apps use CL? Are they all legacy systems? How does it pay in relation to projects using "mainstream" languages?

HRL Laboratories writes new code in Common Lisp for quantum computing. They have internships and full-time positions.

Google employs Common Lisp programmers as well for their flight search product.

RavenPack, SISCOG, GrammaTech, Grammarly, SRI (?), MIND.AI, and others as well (or at least used to). Check out [1].

[1] https://github.com/azzamsa/awesome-lisp-companies

I have written two Common Lisp books, one for Springer Verlag and one self published- that helps.

Common Lisp is awesome. SBCL is fast, there are a lot of decent libraries (https://awesome-cl.com/), and you can use it immediately in production. Lisp REPL-centric development requires some adjustment, and basically* you don't have other IDE options besides Emacs with Slime or Sly, but Common Lisp was the reason I finally learned how to live with Emacs.

* — there are also Slyblime for Sublime Text and ALIVE for VS Code, but they are very beta-quality.

For someone who's never used Emacs properly, what's a good tutorial for using Emacs together with CL in the synergizing ways that people claim are so amazing?

Honestly, watch some of this series.

Kaveh's Common Lisp Lesson 01 https://www.youtube.com/watch?v=nSJcuOLmkl8

He doesn't take you by the hand to setup an environment, but it's a neat exposure (introduction isn't quite the right word) to someone using CL to do interesting things.

Mind, it's a bit Mac- and Clozure CL-specific if you wanted to follow along, but it's still a good watch of just seeing someone doing stuff without explicitly telling you what they're doing. Sometimes you just want to pick stuff up through osmosis, and he does interesting, basic things with CL at all sorts of levels.

Maybe start with https://lispcookbook.github.io/cl-cookbook/emacs-ide.html The 'magic' though isn't really that emacs dependent; if you like vim (like I do) you can use vim and still get that amazing experience. https://lispcookbook.github.io/cl-cookbook/editor-support.ht... has some details on alternatives. Anyway, the magic or synergy is really a combination of: CL itself being designed to support interactive development, a little server you embed (typically automatically if starting from an editor) in a running CL program called Swank, and a standard set of commands to talk to that server (and the underlying Lisp) and interface with your editor called SLIME (https://slime.common-lisp.dev/doc/html/).
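As a minimal sketch of the Swank side of that setup (assuming Quicklisp is installed; port 4005 is just the conventional default):

```lisp
;; Start a Swank server inside any running Common Lisp image.
(ql:quickload :swank)
(swank:create-server :port 4005 :dont-close t)

;; Then connect from the editor, e.g. in Emacs:
;;   M-x slime-connect RET localhost RET 4005 RET
```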

The basics of the commands do things like send code over for compilation and redefining things, running things, responding to errors, debugging things, jumping to definition locations.. things you'd normally expect an IDE to do, except because of CL itself so much of that is part of the language and not a separate IDE. Some more editor-specific things come into play that some people really love, and while they may be best done in emacs they're not unique to emacs either. (Example: suppose you messed up the order of your arguments to a function, so it's written like (my-func (list something) some-other-arg) -- editors have support to swap those two arguments in one go, rather than having to backspace and retype or whatever. I personally don't find such features that magical, and are table-stakes even for things like Eclipse editing Java. Meanwhile even emacs is still weak for some things like large-scale automated semantics-aware refactoring...)

I'm fond of this short 5-part blog series https://malisper.me/debugging-lisp-part-1-recompilation/ (has some emacs gifs) that shows some of these features. But notably again, the experience isn't limited to emacs.

I think it would be better to compare CL with Clojure rather than Racket, since Clojure has found some usage outside academia

I've written quite a lot of Clojure and some Racket, but never really settled down on CL aside from reading PCL and most of PAIP.

In other words, I never coded something sufficiently big to understand the pros of CL.

Hence, I'm curious. What am I missing out on in terms of language features and libraries?

for me the following things stand out

use common lisp (sbcl) if you want c-level performance

use common lisp if you want to make optimizing compilers

use common lisp if you want smalltalk type repl driven interactive development

I'm not a compiler expert by any means, but I would've thought the more functional nature of racket to be a plus for writing compilers. Plus, there's the nanopass framework.

There is no shortage of Lisp compilers.

The new Racket compiler was derived from the Chez Scheme compiler.

I'm curious to know what was lost to the Lisp users in the change on Apple's Newton device going from the early Ralph implementation in Lisp to C++.

On r/lisp there was a rant that included a reflection that things could have been done in a third of the time. There was an ask for $60M to apply Lisp to a different world. Apple can easily afford a $180M experiment to see a Ralph/Dylan implementation of iOS/macOS. If it proved successful, would the software platform leap the way the hardware has across multiple architectures? The ARM M-series devices are a there-and-back-again move, given that the Apple Newton ran on ARM and Apple's investment kept the idea alive.

Or how about all three + Java: https://clojure.org/reference/lisps

A document that puts these differences in context is Rich Hickey's "A History of Clojure"


The design decisions in this paper strongly resonate with me. It helped me to overcome my aversion to JVM-hosted languages, and settle down to learn Clojure.

I'm a research mathematician, at one extreme where code is steps up a mountain. I don't need to summit twice, so productivity is far more important than speed. Nevertheless, the fastest Clojure time trials use GraalVM:



OP questioned Racket's use in industry, not CL's.

AFAIK, the only (better-known) uses of Racket are as a scripting language for Naughty Dog's PS3-era games, and that it was used to make Arc (which runs HN).

I got it the other way around, oops.

Not even exactly the scripting language; more that they used PLT Scheme (now Racket) for writing tooling, including compilers into the various Scheme variants they used for internal scripting in games.

I think the parent comment is suggesting that a comparison of CL vs. Clojure is more useful than the current comparison of CL vs. Racket.


The GUI situation in Common Lisp is pretty pitiful. AFAIK the best you get is ECL+QT5 or cffi-cl-gtk (which is GTK 3). I don't see a very favorable comparison for Racket at all in anything else though. When people talk about the benefits of Lisp, they are usually talking about the benefits of /Common/ Lisp. Other languages often misunderstand what makes Common Lisp so great, like in the case of Scheme's overly complex pseudo-macro system.

The best GUI support for Common Lisp comes from the commercial implementations Allegro CL and LispWorks. Both have cross-platform GUI libraries (ACL for Windows and Gtk; LispWorks for Windows, Gtk, macOS, and Motif). Both have their IDE implemented with their respective GUI library. If one wants to write a substantial GUI application in Common Lisp, that's one of the reasons people pay quite a bit of money for those implementations and for commercial support.

There have been very substantial GUI applications implemented with both systems.

I'd argue that the brightest minds in computer science understand what makes lisp great perfectly.

In the lisp tradition, they made a lisp geared towards their use case, specifically. It's called Scheme... which sports much more than 'a pseudo-macro system'.

> I'd argue that the brightest minds in computer science understand what makes lisp great perfectly.

That is your personal opinion. Personally, I consider the design of Common Lisp to be much superior.

> In the lisp tradition, they made a lisp geared towards their use case, specifically. It's called Scheme...

If by "their use case" you mean teaching CS, then I would agree.

> which sports much more than 'a pseudo-macro system'.

Well, Scheme doesn't really have many features. It's meant to be a simple (and by extension limited) language. I mentioned their pseudo-macro system since it's one of the parts where Scheme (badly) disconnects from its Lisp lineage.

> Well, Scheme doesn't really have many features. It's meant to be a simple (and by extension limited) language. I mentioned their pseudo-macro system since it's one of the parts where Scheme (badly) disconnects from its Lisp lineage.

A language does not necessarily become limited when it is simple. If the right simple concepts are available, all kinds of stuff can be built with them, bootstrapping more complex concepts or features. It becomes rather a question of how much is already there, done by others, or how much work one wants to put in to have some concept in one's language. So I wouldn't necessarily say that it becomes a limited language.

Simple languages tend to be very unportable. This has been a plague on Scheme since the beginning, and which the R6RS / R7RS standards have tried to rectify (it's debatable how successful this effort has been). This is one way in which simple languages are limited. (The fact that there are many different Scheme standards certainly doesn't help).

Another part is stripping of useful features to ease implementation. The Scheme language is full of uncomfortable, low-level constructs for the sake of simplicity. In contrast, Common Lisp is far from simple but it contains many high-level constructs which are nonetheless simpler to use.

> Simple languages tend to be very unportable.

On the contrary. A simple language can be relatively quickly implemented on another system, because there is not much to implement. If we look at how many Schemes there are, implemented for various platforms, and compare that to how many Common Lisps there are, we quickly see that difference. Of course CL is also more geared towards being "the one" language, rather than "make your own version of CL", exactly because of the effort required to do so. In that way one could argue it cannot even get close to the portability of a simpler language.

> Another part is stripping of useful features to ease implementation.

By useful features, you mean things that reduce the work of implementing things? Of course a bigger language will have more stuff built in; that's in the definition of a bigger language. The point, though, is that Scheme does enable you to build these things. Once there is a library for it, you can use it just like the built-in things in CL. Whether that always happens to happen (a library existing) is another question. There is nothing in the language that would prevent you from building that thing though.

> The Scheme language is full of uncomfortable, low-level constructs for the sake of simplicity.

I find them very comfortable. I also find its simplicity refreshing and a pleasure to use. It enables me to gain more understanding of concepts that I might implement or copy from others.

Scheme is geared towards two things: research and teaching. It's a lingua franca for demonstrating implementations of new CS concepts in ways that members of the community understand. Being easy to implement from scratch is a must for both use cases. Being a less imperative language also makes Scheme closer to the math.

Is it really? I haven't read many papers where new algorithms or techniques were demonstrated using Scheme.

> Being easy to implement from scratch is a must for both uses cases.

If you are teaching how to write a compiler or an interpreter, yes. Otherwise I would have to disagree.

Ok, I shouldn't have said 'algorithm'. Let's restrict that to 'untyped programming language theory'.

There are also Tk bindings https://github.com/herth/ltk

yeah… there are good IUP bindings, and embedding a CL web app into Electron (or Neutralino) is easy, once you have your standalone binary (I'll blog about that). https://github.com/CodyReichert/awesome-cl#gui

Am I missing the part where it says the specific use cases the author is speaking about? It sounds like their arguments are based on ”real-world” development, while my guess is that Racket might fare better when you’re looking strictly at a tool for learning.

At the beginner (high school / undergrad) level all Lisps are more or less the same. The finer points of Lisp-1 vs 2, CLOS, etc. are not comprehensible without a lot of prior work.. and by that point you should optimize for real world practice anyway IMO (something which many university courses do not do).

If we're talking middle school, then there are some arguments to be made that Racket's easy-to-use graphics libraries are more conducive to learning. It will never be Flash though.

I am surprised to hear that multiple namespaces with identical symbols is not a source of confusion for learners. IMO, in high school I would've been lost.

Scheme diverges from lisp pretty fast.

Those days I'm really rooting for PicoLisp (https://picolisp.com/wiki/?home)

I don't use either and both languages are on my todo list, but I think I would have already used Racket if I liked the editing experience more - DrRacket is just not going to cut it, if I'm completely honest. CL with Spacemacs is pretty good.

All of that barely matters to me. Where's the transparent parallelization? 2022 is going to be over soon, no language has any excuse to lack an easy way to do message passing and automatic thread pool management like Erlang and Rust's `tokio` do.

There is a good excuse: Not having multi-{m,b,t}illion-dollar corporate entities working on the language full-time is a good reason for something to not exist.

Not having a corporation back the development of a language has tons of downstream effects, including:

- lack of support opportunities

- lack of employment opportunities

- lack of evolution and development of an ecosystem

- higher churn and/or attrition

- fewer opportunities to do long-term design and implementation projects

Yes, sometimes there are FOSS miracle workers who are so extraordinarily productive without being paid, but they're few and far between.

(Aside from that, not all users need such functionality.)

Honestly, there are way too many programming languages already. Even though a chunk of them are interesting as thought experiments and potential future investments (like that one that content-addresses its own AST; forgot the name), I believe it's high time we all focus on more practical aspects.

Parallelism is one of those. Only microcontrollers are single-core these days, and not even all of them.

I sympathize with the people who have fallen in love with their own idea and are sad that others "don't see the light" and don't support them -- but our world is an imperfect system and that's a fact of life.

F.ex. I'd kill for a LISP with strong(er) static typing plus Erlang's transparent parallelism and runtime but yeah, we don't have that. So Elixir and Rust it is for me.

> I'd kill for a LISP with strong(er) static typing

interestingly, the person you are replying to is an author of this little lib


Heh, quite nice. I wish LISP in general standardized something like this.

it's not necessary to standardize it. there are enough primitives in the standard. instead the language can be extended through libraries such as this

tillion :D

Lparallel is typically employed for transparent parallelization, thread management, etc. https://lparallel.org/cognates/
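For the curious, a minimal sketch of what that looks like (assuming lparallel is available via Quicklisp):

```lisp
(ql:quickload :lparallel)

;; Set up a thread pool ("kernel") with 4 workers.
(setf lparallel:*kernel* (lparallel:make-kernel 4))

;; pmapcar is a parallel drop-in for mapcar.
(lparallel:pmapcar (lambda (x) (* x x)) '(1 2 3 4))
;; => (1 4 9 16)

;; Futures run in the pool; FORCE blocks until the result is ready.
(let ((f (lparallel:future (reduce #'+ (loop for i below 1000000 collect i)))))
  (lparallel:force f))
```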

Wow, I did a double take, you sound like me. When I said something similar praising Haskell's effortless parallelism, someone made a reply that led to my learning Clojure.

That Haskell has a transparent parallelism is news to me. Does it do the things that Erlang does? Or at least Rust's `tokio` async runtime?

Yes, Haskell has green threads/co-routines.

Racket /also/ has `thread`, which is a co-routine, but afaik you cannot pass the co-routine off to another native thread, and there are certain constraints on passing data between different threads, similar to transferring ownership.

I am not aware of the SBCL ecosystem to comment.

Transparent parallelization needs a new generation of languages that are designed to let compiler do parallelization transformations. Like Futhark. Hopefully we can get better integration than in the procedural language side where the parallel languages are completely separate from host app languages and are called via FFI.

Well, I am likely misguided since I never wrote a compiler in my life -- but to me it feels the "purer" a language is, the easier it is to make automated analysis on how to parallelize its code fragments? Hence FP languages are better candidates, I'd think?

It can be a helpful feature together with other language design features supporting parallelism, but it's not a silver bullet. Existing PLs from the old sequential world have proven hard to auto-parallelize, FP or not.

A Common Lisp example that leaves out Lispworks and Allegro Common Lisp focusing only on SBCL + emacs, as usual.

SBCL is de-facto most popular implementation of Common Lisp, if I dare say. It is free, it is well maintained, it is easily available, and it is fast. As a student/enthusiast, it doesn't get much better.

LispWorks/Allegro is better if specialized requirements or actual money is on the table. LispWorks supports mobile platforms IIRC, which SBCL does not, for ex. But if all one needs is server-side deployments or personal projects, there is no need for, nor any use of, commercial implementations, as good as they might be.

One has to stress: SBCL is probably by far the best Lisp compiler out there. So it is not only free, but also good. Might require a few type annotations to get the best performance though. And the diagnostic messages can sometimes be annoying, but they are right :)
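To illustrate the kind of annotations meant here (a hypothetical function, not from the thread):

```lisp
;; Without declarations SBCL compiles this generically; with them it
;; can emit unboxed double-float arithmetic in the inner loop.
(defun sum-doubles (v)
  (declare (type (simple-array double-float (*)) v)
           (optimize (speed 3) (safety 1)))
  (let ((acc 0d0))
    (declare (type double-float acc))
    (dotimes (i (length v) acc)
      (incf acc (aref v i)))))
```

Compiling with (speed 3) also makes SBCL print notes about any remaining generic arithmetic it could not optimize away, which is what the "annoying but right" diagnostics refer to.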

Basically like the difference between GCC with vi and using CLion or Visual Studio.

My choice is CCL

The last I heard CCL still didn't support Apple's new range of processors, which sounds bad for its future.

but sbcl is far more common than those two. it's also free, unlike those two. i think this is ok as long as it's stated clearly (and it is)

I’ve used all of these semi-extensively (slightly less so LW). The environments you get with FL and LW are useful if you aren’t an experienced Emacs user, and each of the pro ones comes with a lot of useful support (and FL’s amazing but expensive gdb), but these days, if all you want to do is get programming done, and don’t need, or don’t want to pay for, a highly supported world-class engineering platform, you can do pretty much everything you want at a lot less cost and weight with SBCL, Emacs, and some calling out to Python libraries. If some multi-millionaire wanted to do the world a solid, they’d buy and open source (and pay for continuing support of) FL.

As a lifelong vi user, SLIME is useful even if you aren't an experienced emacs user.

For a while I used SLIME as my debugger and vim as my editor.

I'm inclined to try to learn a Lisp, but the tight integration with Emacs is putting me off. I spent a lot of time and effort to learn vim and I'm wondering why a family of languages should be this dependent on a specific editor...

> I'm wondering why a family of languages should be this dependent on a specific editor...

As mentioned by others, it's not. But Emacs is also a Lisp runtime. People who are attracted to Emacs eventually learn some Lisp (Emacs Lisp, closer to CL than Racket et al.) or were attracted because of Lisp in the first place. As a result, the overlap between CL devs and Emacs users is naturally higher than that for other editors. This translates into people developing more and better tools for their editor, and higher popularity usually translates into more and better tools (with obvious exceptions).

As mentioned by other commenters, SLIME can be used with Vim too, if that's your thing. But just like SBCL, Emacs is kind of the de facto editor of choice, not by force, but by the love of its users :)

I've done the vast bulk of my CL work using just vi. Specifically, I was doing the typical "reload the lisp file" life cycle, rather than the interactive "just redefine this defun" style of development. Even when I used Lispworks, it was "save file, reload it" style of development.

I used CLISP because it had GNU readline. (e) to edit my file, (l) to reload it, and off to the races (I wrote those trivial defuns). No packages, no imports, no asdf. Everything in CL-USER.

I did switch to emacs for one simple reason: tab to reindent. Other than that it was the most basic of emacs use (load file, save file, ^s to search, cmd-XCV for cut and paste, pgup, pgdn). No Sexpr navigation, no buffer juggling, none of that, no lisp integration whatsoever outside of paren matching and the tab to indent.

Aw, dude, if you’re not buffer juggling - esp with three or four shells! - you’re missing out on one of the great joys/mysteries of life!

Besides that I can only recommend Slime, you can basically convert any program that doesn't use readline into one via rlwrap. Very useful :)

> why a family of languages should be this dependent on a specific editor...

The language isn't, but SLIME is because emacs isn't just an editor, it's a platform and system that people develop applications on top of. And the community was already there, using emacs, to continue with that option.

It’s not. You can use any editor. Emacs just happens to have a package called slime that helps - I’m told. I’ve been using just lisp+emacs w/o slime for decades. To be clear, except for slime, which as above is optional, any editor will do just fine.

SLIME is a third-party tool that integrates with Common Lisp; it isn’t part of the language. With most Common Lisp implementations you can just use the repl to call (inspect foo) to see what slots foo has and their values. It will probably be a little clunky since drilling down into the details might require you to type numbers at a prompt. Meanwhile SLIME presents the same information in a UI, so you can click on things (or type the numbers if you prefer).

It just happens that SLIME is a pretty well designed UI (the acronym stands for Superior Lisp Interaction Mode for Emacs, so I guess that was the goal). It’s miles ahead of what you get from most other languages, even C++, but most of what makes it so great comes from features that were already present in the language implementations. It also combines very well with other advanced Emacs packages such as Paredit.

Interestingly, SLIME is implemented as a network protocol to allow for remote debugging, so you can implement a new UI for it any way you want. I’ve never used it, but you might check out SLIMV instead. https://www.vim.org/scripts/script.php?script_id=2531

Join me vim brother and don't settle for forcing yourself to use emacs while developing in CL when you don't have to! You even have two vim options! https://github.com/kovisoft/slimv and https://github.com/vlime/vlime with a great comparison of the two: https://susam.net/blog/lisp-in-vim.html

You can emulate vim key bindings in Emacs with ViperMode.

No you can't. evil-mode gets pretty close though.

Vim isn't just keybindings.

How do you use SLIME in Vim? Isn't SLIME written in Emacs Lisp, which runs only in Emacs? Honest question: how could you run an Emacs package in Vim?

SLIME is, but there is Slimv which is a SWANK client for vim:


(Never used it myself so can't comment on it beyond noting its existence.)

I have an article I wrote about how I did this, but may have lost it to time.

The short version is I have SLIME running inside Emacs, and my text files open in vim. I would just reload the changed file(s) when needed (originally with cl:load, then later with asdf, which is roughly "make" for lisp). I used SLIME similarly to how I would use gdb. It was a big improvement over using vim with the clisp[1] REPL, both because SLIME was more powerful than the clisp repl, and because it worked with other implementations like CMU CL (which SBCL is a fork of).

I mostly just used the menus for interacting with emacs, but since the shortcuts are listed beside commands, I did learn the more commonly used ones.

I continued like this for years until I discovered evil-mode[2], which was the first thing close enough to vim to be usable for me (and even then I had to add a few keybindings that it was missing).

1: clisp is a specific implementation of common lisp, not an abbreviation

2: https://github.com/emacs-evil/evil

To provide a bit more context, most of SLIME is just Common Lisp code (https://github.com/slime/slime), with a bunch of Emacs Lisp code alongside to support interfacing with Emacs. But you don't need that Emacs Lisp code to take advantage of almost all of the functionality SLIME provides. For instance, if you want to know who-calls a function, there's some command in emacs to do it, but all that command is doing is just a bit of elisp code which sends a message to Swank (a server running inside Common Lisp) and Swank invokes some native CL code to figure that out and return the results, then finally a bit of elisp code presents the results in some way. Vim can do the same thing just fine with vimscript/python (what the Slimv plugin uses) or otherwise, the bulk of the work in figuring out the list of callers of some function is done by the CL code (and CL implementation itself).

With this: https://github.com/jpalardy/vim-slime

It doesn't do everything, it doesn't do it as elegantly, but it works

This does not look like SLIME. It seems to just copy text from one buffer and paste it into another Vim buffer which is running a REPL. That is not really what SLIME is! I can't even call this a poor imitation of SLIME, because it is not SLIME in any sense of what SLIME is. There is no Swank server this connects to. There is no understanding of Lisp's s-expressions. It would happily copy any random text into any random REPL and call it job done! It is weird that they chose the name "SLIME" for the vim-slime project when it is not SLIME in any way.

You don't. You run it in emacs. That's what they are saying.

You make it sound like SBCL is an also ran. SBCL powers Google flights.

Def didn’t mean that. I use sbcl+emacs all the time. It’s my go to platform. I’ve used FL and LW but don’t need them. SBCL+emacs is all you really need. (I started lisping before slime so never really got into it. If it’s half as good as folks claim then maybe square all of the above. But someone else will have to chime in on that.)

What Common Lisp implementation is FL ?

Sorry. Allegro used to be called Franz Lisp. ACL would be a more correct usage.

Although Franz Inc supported Franz Lisp and developed Allegro Common Lisp, it is my impression they are distinct implementations. I can't find any hard evidence one way or another. Do you know?

I think FL just morphed into ACL by renaming, and probably some code changes, but not reimplementation; not having been there, I’m not 100% sure. Since then, of course, it’s essentially a different implementation after many revs.

They say Allegro CL was a new implementation written from scratch:


what calls to python libraries are you referring to?

i think it would be nice for someone to do a comparison of development in emacs+slime vs allegro cl vs lisp works

Unfortunately I’ve rolled my own ugly Python FFI just to use Python libraries. It’s slow as frozen honey because it gets to the great Python libraries by slogging through Python itself to do the data (de)muxing. That millionaire I mentioned should figure out how to get SBCL to call Python packages directly.

And so the knowledge of a poor development experience for Common Lisp keeps spreading.

Please don't take HN threads into internecine PL flamewar. It's not in the interests of the thread, the HN community, or the Lisp community.

On HN, we've always particularly wanted to avoid the nasty tribal fights that have afflicted CL discussions in the past.

It is not a flamewar, rather being deprived of the knowledge of what a full blown Common Lisp experience used to be like.

Yes but we need you to make those points substantively, neutrally, interestingly—without swipes or putdowns: https://news.ycombinator.com/newsguidelines.html. If you just post grumpy one-liners we're going to get a flamewar, and the primary fault for that lies with the comment that starts it.

I know from many past comments that you have a ton of knowledge to share—it would be way better if you would explain some of that to the rest of us, rather than holding it back and just giving us a shallow version of your final conclusion, with a pointed end.

This is, like some other of your comments, very condescending. I won't speculate on what you hope to achieve by these remarks, just comment that I don't think this kind of condescension has done Common Lisp much good over its history.

Oh well, I already know how Common Lisp environments of yore used to be like.

If others don't care to learn about them so be it.

Who said poor development environment. Emacs is a terrific lisp editor and straight sbcl (or any real lisp) is a better dev env than almost any other straight pl env.

Compared to how Lisp Machines used to be, yes, it is poorer.

Commercial CL environments are the surviving ones that still offer a similar experience.

Yeah, but that’s just complaining that just because we used to have flying cars we still should. We decided as a society to give up on flying cars. They’re gone. Move on with your (our) boring 2D lives.

I don’t think it’s fair to say that we’ve given up on Lisp machines because we chose to do so. Most people never got to experience them to begin with! Certainly not enough people to build as thorough a software ecosystem as we expect nowadays.

i think you mean sbcl+emacs+slime/sly. straight sbcl is pretty limited

To be clear, slime isn’t necessary.

Care to pay the author for an Allegro CL and/or LispWorks license so they can broaden their article?

Community licenses are available, I don't need to pay anything.

And severely limited at the same time, to the point where people, IMO understandably, prefer to use 100% free alternatives rather than write about the subpar experience available with "community licenses". Free versions of both LW and ACL have major hurdles that are getting in the way of developing software on these implementations.

That subpar experience is still better.

Perhaps you can write some articles using them, showing how they're better?

Not if your lisp image exits even with simple programs.

Let's assume the author shared his experience, rather than a comparison of features. Experience gained by actual usage. Those community licenses are severely restricted, to the point of being unattractive (to me, and I suspect many others). I cannot be bothered to sign up and install crippleware when productive tools are available for free. LW might be the best thing since sliced bread, but I'm afraid I'll never know for sure.

And yet those community features offer a better graphical development experience, InteliJ of Lisp, so to speak.

They offer a more graphical development experience, yes. Whether it is better than SLIME would need to be discussed. But the point is quite irrelevant at the moment when you just cannot develop anything due to memory limitations.

I would think that one of their advantages is that their GUIs are more similar to the underlying platform's look and feel. That is easier to learn and easier to use, compared to tools based on GNU Emacs, which is basically its own platform. There are a lot of user interfaces in GNU Emacs which are not very user friendly and not good-looking.

Yeah, the Emacs UI is definitely a bit lacking. It could be greatly improved if they embraced Gtk a bit more. Especially if some widgets could be handled from elisp. But what it lacks in polish, it often makes up with usability, once you are a more advanced user. And of course, the sky is the limit for customizability.

I haven't used the recent versions of ACL and LW, so I cannot make a direct comparison, but for the experienced user, Emacs offers a lot of productivity.

If you ask for commitment before the pupil turns into a master, they will go elsewhere.

It escapes me what this has to do with my comment. That was directed at Emacs development in general. It is based on Gtk, but doesn't fully embrace it, still limiting itself to mostly terminal emulation.

Wasn't the price of these implementations one of the factors for the (symbolic) AI winter?

I would say it's just karma.

These implementations survived the AI Winter, despite their high prices.

A lot of the cheaper commercial implementations did not survive or did not thrive: Corman Lisp (now open source, but not used much), Golden Common Lisp, Exper Common Lisp, Procyon Common Lisp (got bought by Franz), MuLisp, Macintosh Common Lisp, ...

Isn't MCL now Clozure CL?

MCL was a commercial product (for a time even owned, developed, and sold by Apple directly) with an employed team working on it. Apple had a bunch of projects and tools using it. MCL had a manual, a GUI, support, etc.

Clozure CL is the result of a project extracting/porting the bare CL implementation from MCL. It was open sourced. It got a GUI later, but never again had the polish and appeal of a product like Macintosh Common Lisp.

Not at all. Both of those projects started inside the second AI winter. They can even be seen as an attempt to save Lisp after the market for Lisp-specialized hardware collapsed. That said, the cost of tooling for other contemporary languages was high back then as well. I wouldn't be surprised if, relative to total project costs, LispWorks and ACL were actually cheaper.

I just can't get past referring to Scheme as "schemes". Forcing myself to read the rest of the article. Am I getting punk'ed?

  I prefer _schemes_ over _Common_Lisps_, and I prefer Racket of the Schemes. 
The author is just using the plural to acknowledge that there are multiple variants of the Scheme and Common Lisp languages and environments.

This is not uncommon.

> I just can't get past referring to Scheme as "schemes".

That isn't something that happens in the article. Compare this sentence:

>> Now if you still want to use Scheme, Racket blows the other Schemes out of the water with what you get out of the box. […] But it is not without its problems.

> Am I getting punk'ed?

Only by yourself.

> A better developer experience. Emacs + Slime is way better development and debugging experience than either Emacs + Geiser, or Emacs + Racket-Mode or Drracket.

I've thought about learning a Lisp in the past, but the Lisp community's insistence on using Emacs has completely put me off of it. I will not use Emacs under any circumstances, and you couldn't pay me enough money to change. And this has led me to conclude that Lisp is simply not for me.

You can always use a commercial Lisp vendor's solution or vim + slimv for CL, or DrRacket for Racket. There are other comments in this thread about the vim + slimv alternative to emacs + slime (I don't use it myself since I learned emacs as a freshman in college and stuck with it, so check out their comments to get an idea of how it's done).

I don't think anyone insists that you use Emacs?

Anyway, not sure why anyone but you should care about you learning something?

Everyone has to deal with their own psychological barriers somehow.

Is it because you are used to vim bindings? Emacs offers an "evil mode" that implements modal bindings.

I have multiple reasons for not wanting to use Emacs. I detest every facet of the UI, among other things.

Also I don't care about Vim bindings. I use Kate for writing code about 75% of the time anyway and would prefer to continue doing so.

What do you use the other 25% of the time? Anything in https://lispcookbook.github.io/cl-cookbook/editor-support.ht... ? I've never used Kate before now, but I gave it a spin. Seems you could use it for some primitive Lisping (certainly to just teach yourself the basics, or make edits to existing programs), but I wouldn't recommend it long-term. A big win with Lisp is in interactive development, so you want an editor that supports something like Slime -- particularly the ability to have a running program and interactively run code, which might define new functions or redefine old ones or change some state, whatever, and have such changes immediately take effect.
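To make the interactive-development point concrete, here is a minimal, illustrative Common Lisp REPL transcript (not tied to Slime or any particular editor; the function name is made up for the example). In a running image you can recompile a single definition and the change takes effect immediately, with no restart:

```lisp
;; Define a function in the live image (in Slime: C-c C-c on the form):
(defun greet (name)
  (format nil "Hello, ~a" name))

(greet "world")   ; => "Hello, world"

;; Later, recompile a new definition in the same running image.
;; Existing callers pick up the new behavior immediately:
(defun greet (name)
  (format nil "Greetings, ~a!" name))

(greet "world")   ; => "Greetings, world!"
```

The same mechanism applies to redefining methods, classes, and variables, which is why copy-pasting into a disconnected terminal REPL (as described below for Kate) loses so much compared to an editor that talks to the image directly.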

I suspect a plugin for Kate could be made, but I didn't see one after a very brief search. The maintainers probably want to encourage more use of LSP anyway. There is an LSP server for Common Lisp, but it's pretty basic, and is still at odds with how you want to do things. (Perhaps Clojure has a more developed one.) It at least "works" with Kate in the sense that Kate successfully launches and talks to it after making a custom settings.json, and gives me simple symbol completion and docstrings on hover. But to learn about my own functions and their docstrings, I need to be able to load my code into the Lisp instance that's also running the LSP server. (Traditionally IDE-ish features in Lisp aren't done by external tools statically scanning files, but by introspectively asking the Lisp program itself for the info, which may include file info.) I did hack something together so that I was able to copy-paste some code from Kate to the split terminal panel running a second instance of Lisp that talked to the one Kate launched, but this isn't very productive. You could maybe get by with writing some code, saving, and then reloading the whole set of files (system) with one command when you want, but you'd need even more work to make errors interactively recoverable, which is another really nice feature of Lisp development when you have proper editor support.

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact