Ask HN: Production Lisp in 2020?
222 points by dhab on May 19, 2020 | 216 comments
I have gone through the initial chapters of "Practical Common Lisp" [1] and "Loving Common Lisp" [2], and have a bit of intuition about Lisp and the power of macros. I haven't done any projects yet, but I was also researching the real issues faced when adopting Lisp, and ran into articles about abandoning Lisp [3] and adopting Lisp [4], [5]. I could not find anything more recent, but even the articles about adopting Lisp mention issues like:

* poor ecosystem of libraries - few gems, most other half-baked

* poor community coordination

* Dependency management limitations with quicklisp

And some specific red flags like:

* poor support for json[6]

* poor support for async

* have to restart the server every 20 days because of some memory leak [3]

* hack to tune GC [5]

If you are using Lisp in production for non-trivial cases, do these issues still exist? Is there a way to quantify the effort of resolving them, and if so, what is it? And finally, if you had to re-do your project, would you choose Lisp or something else?

[1]: http://www.gigamonkeys.com/book/

[2]: https://leanpub.com/lovinglisp/read#quicklisp

[3]: https://lisp-journey.gitlab.io/blog/why-turtl-switched-from-...

[4]: https://lisp-journey.gitlab.io/blog/why-deftask-chose-common...

[5]: https://www.grammarly.com/blog/engineering/running-lisp-in-p...

[6]: https://stevelosh.com/blog/2018/08/a-road-to-common-lisp/#s4...




Yep, I've used Common Lisp in production. Several times. I'm in the middle of writing a new app with Lispworks right now.

Common Lisp is generally the first thing I think of for any new work I undertake. There are exceptions, but usually my first choice is Common Lisp. It has been for a little over thirty years.

I've often worked in other languages--usually because someone wants to pay me to do so. I usually pick Common Lisp, though, when the choice of tools is up to me, unless some specific requirement dictates otherwise.

The objections you list might be an issue to some extent, but not much of one. Certainly not enough to discourage me from using Common Lisp in a pretty wide variety of projects. I've used it for native desktop apps, for web apps, for system programming, for text editors, for interpreters and compilers. When I worked on an experimental OS for Apple, the compiler and runtime system I used were built in Common Lisp.

I'll use something else when someone pays me enough to do it. I'll use something else because it's clearly the best fit for some specific set of requirements. I'll use something else when there isn't a suitable Common Lisp implementation for what I want to do. I'll even use something else just because I want to get to know it better.

I've taken various detours along the way into learning and using various other languages. I like several of them quite a bit. I just don't like them as much as Common Lisp.

The pleasure I take in my work is a significant factor in my productivity. Choosing tools that offer me less joy is a cost I prefer not to bear without good reason. That cost often exceeds the advantage I might realize from using some other language. Not always; but often.

There was once a language I liked even better than Common Lisp. Apple initially called it 'Ralph'. It evolved into Dylan, which, in its present form, I don't like as much as Common Lisp. If I or someone else invested the considerable time and effort needed to write a modern version of Ralph, then I might choose it over Common Lisp.

For now, though, I'll stick with the old favorite. It continues to deliver the goods.


Care to elaborate on how Ralph was different from cl/dylan? Or at least point me to some resources to read more about it? I don’t seem to find anything specific on the internet. Thanks


When Ralph was new, it was essentially a hybrid of Common Lisp and Scheme. It was very much a Lisp--not just technically, or in matters of surface syntax, but down in its bones.

The surface language was basically Scheme, with some special forms removed and others added or renamed. The type system was basically a simplified Common Lisp Object System; all values were instances of CLOS classes.

The compiler, runtime, and development environment were built on Macintosh Common Lisp. Ralph inherited MCL's design and aesthetics.

Like Common Lisp, it was designed for writing software by interacting with it and modifying it as it ran. That's the essential point.

The project I worked on, bauhaus, was running basically all the time we worked on it. We worked on it by interrogating and hot-modifying the running system, in the time-honored tradition of old-fashioned Lisp and Smalltalk systems.

Later Dylan was not like that. It evolved into just another batch-compiled application language. You edit some files. You batch-compile them. You run the resulting artifact for testing. Rinse and repeat.

That's how OpenDylan is today. That's why it lost me.

It's also what's wrong with some newer Lisps, from my point of view. They aren't designed for building programs by modifying them from inside while they run. They don't know how to handle existing data in the presence of redefined types. They don't provide interactive debuggers and inspectors that can be used to rummage through the memory of the running program, edit it while it runs, and continue execution.

Common Lisp provides these features, and has done for decades. Ralph provided them. Smalltalk does, too. If all the Common Lisp implementations in the world suddenly evaporated, I could get along fine with Smalltalk, after a brief period of adjustment. It's probably purely an accident of history that I'm a Lisper rather than a Smalltalker, anyway.

But Dylan doesn't have those essential features anymore, and nothing that lacks them is likely to win me over--not OpenDylan, not Julia, not Clojure or Haskell, or F#. I like all of those languages, but not nearly as much as I like my livecoding.


Strongly agree with that viewpoint and the central paradigm it describes; it's what's made my 15-year investment in Lisp more than worth it. It's also what makes every newly emergent language fail to measure up to Lisp: none of them focus on interactive development. In fact, the only one I'm aware of that (partly) did was Slava Pestov's Factor, but ultimately it was more a proof-of-concept than anything that could be considered practical, and it is now languishing.

Later, Slava went to Apple to work on Swift, which turned out to be another mediocrity, lacking any sort of originality / vision and having a firm expiration date. There used to be a time when programming languages focused primarily on end-user empowerment and paradigm-centered computing was the norm (e.g. Seymour Papert's groundbreaking research in constructionism and incremental learning / development). There is precious little left of that now.


Hmmmm...PharoSmalltalk, 8th (same rough family as Factor), Mathematica, Dyalog APL, J, K...lots of tools that are fully interactive in their bones.


Pharo and other Smalltalks are definitely interactive in the sense that I mean. Factor, too, as far as I know, and I'd add FORTH, though Lisp and Smalltalk systems tend to have more high-level conveniences than FORTH systems I've worked with.

I don't know whether the others you mention are or not.

A reasonable litmus test is: define some data structure and make a bunch of instances of it. Now redefine the data structure. What happens to the existing instances? Does the runtime automatically notice that the structure definition has changed and offer you a way to update the existing instance to comply with the new definitions?

Another is to write a function that intentionally contains a call to some other nonexistent function, then call it. What happens when you hit the undefined function? Are you offered an interactive session that you can use to grovel around in the dynamic environment of the suspended function call? Can you see all the parameters and locals on the stack? Can you edit their values? Redefine types and pending functions? Can you supply the missing function definition, resume the suspended function call, and reasonably expect it to return a value as if the function in question had been defined all along?

If the answers are yes, then the language runtime is interactive "in the bones", in the sense I mean. If not, then not so much.
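
Concretely, in Common Lisp the first litmus test might look like this (a minimal sketch; the class and slot names are invented for illustration):

```lisp
;; Define a class and make an instance of it.
(defclass point ()
  ((x :initarg :x :accessor point-x)
   (y :initarg :y :accessor point-y)))

(defparameter *p* (make-instance 'point :x 1 :y 2))

;; Tell the runtime how to bring old instances up to date
;; when the class is redefined with a new Z slot.
(defmethod update-instance-for-redefined-class :after
    ((instance point) added-slots discarded-slots plist
     &rest initargs)
  (declare (ignore discarded-slots plist initargs))
  (when (member 'z added-slots)
    (setf (slot-value instance 'z) 0)))

;; Redefine the class while *p* still exists.
(defclass point ()
  ((x :initarg :x :accessor point-x)
   (y :initarg :y :accessor point-y)
   (z :initarg :z :accessor point-z)))

;; *p* was created before the redefinition, but it is updated
;; lazily on next access and answers to the new definition:
(point-z *p*) ; => 0
```

Existing instances are updated automatically, and `update-instance-for-redefined-class` gives you a hook to decide what happens to them.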


I wouldn't characterize Pharo/Mathematica/Dyalog APL as newly emergent; they've been around for a long time. I can't say anything about J and K since I've never used them, but my superficial impression is that they are focused on a specific niche, not as general-purpose as most Lisps. They've also been around for a long time.

8th I have actually looked at and it's both _not_ particularly interactive (certainly nowhere near as interactive as Factor) and has a very strange license [1] to boot which makes it a total nop for me. It has been criticized before [2].

[1] https://8th-dev.com/com-license.html

[2] https://news.ycombinator.com/item?id=15672361


New is pretty relative, but yeah they've definitely been around for a while. I'd say the array languages (APL/J/K) can be used in far more areas than most people think, but I understand your point and am on the fence myself.

What is wrong with 8th's license? It is basically a Forth, so extremely interactive...mostly just messing with a stack...not sure about metaprogramming. I think a production license costs you $200 as a one-time fee and you have to pay no royalties or anything for apps (better than many tools out there). Upgrading to a later version comes at a discount, too. Most of the runtime is proprietary/closed-source, if that is what you were saying (def understandable if you had a problem with that). Also, being a 1-developer project, progress is rapid, but I'd be worried about what happens if the maintainer wins the lottery.


The second link I included goes over some of the issues but regarding interactivity, just look at the videos its author produced to show it off. He's writing code in either VIM or the shell. That's a long way from Factor's image-based interactivity and a long long way from Smalltalk and Lisp Machine-paradigm Lisps.

Any language with a REPL can be described as "interactive" but that's not what having a strong focus on interactive development means. As Mikel Evins wrote, it's really about the entirety of a language being geared around interactive development.


It's "mikel evins".

I can't blame you for parsing it the other way; it's a more likely spelling. You can blame my eccentric father for the odd spelling of my first name, but he inherited the surname fair and square.

It's pronounced exactly the same way as "Michael Evans."


I'm always grateful when the map-territory distinction comes along and slaps me in the face. It's a signal that I should re-calibrate.

It's been a while, thanks for making it happen!


I actually take that as a positive item. He's able to effectively write code using just the terminal and vim. It doesn't get simpler than that. I really hate having to use something like Visual Studio or even VSCode. They're bloated and use a ton of resources. Playing around in Pharo is fun for a bit, but same problem.

I 100% agree that it seems like you can't unwind the stack the same way you can in Common Lisp or Smalltalk though.


Pharo is moving down strange pathways and some questionable recent implementation choices made me lose my faith (for now). Squeak has always been and remains my Smalltalk of choice, and if you can look past some -probable- superficial ugliness (e.g. Morphic) it remains relevant and extremely iconoclastic.


It seems like Pharo has all the action going on. Is Squeak still maintained?


I don't remember ever hearing of Ralph when Dylan appeared, though that was long ago. Was it talked about externally?


The name "Ralph" wasn't really secret anymore once Dylan went public, but there also wasn't any particular reason to talk about it. The name of the language was Dylan at that point.

Later, as Dylan evolved away from its roots, it made sense to resurrect the old code name to distinguish the Lisp that we used to write a Newton OS from the later non-Lisp that it gradually evolved into.

People did know about it, though. One of the links in a sibling comment is to turbolent's little Lisp that compiles to Javascript, named Ralph in homage to the language I used every day back in the early nineties.


Probably discussed before, and I’m happy to read about it if you have pointers to it, but how does livecoding work with multiple ppl/source control? Is there a persistent textual format where all changes to the image are stored? How about releases?


Old-fashioned Lisp and Smalltalk systems (and some old Scheme systems) can save the live memory image to a file. You can load that file at launch time to reproduce the dynamic state of the system at the time the image was saved. You can share saved dynamic state between colleagues that way. Pretty handy, especially when you want someone to be able to debug a hard-to-reproduce bug that just showed up on your screen.

We used to keep known good images to work from. Generally a pristine release image, and a reference image with current project code preloaded, and then one or more working images with programmer-specific or task-specific code loaded. I used to save a working image at the end of the day and launch it again the next morning. MCL would remember my state right down to the positions and contents of my windows, so it was a quick and easy way to drop right back into the state of mind I was in at the end of the previous day.

For source code, we used revision control, exactly as you'd expect. In the Ralph days it was Apple's Projector. Nowadays most folks use git.

It's not super convenient to use revision control for dumped images because they're moderately large binaries. Source-control systems are mostly not great with those. So I keep reference images around, treating them like any other valuable binary asset--the difference being that rebuilding them from scratch is pretty fast and easy with today's fast machines.

My standard operating procedure is to start with a single scratch file that constructs a sketch of my naive idea of the data I'm going to be working with, load it, and start poking it to find out where I'm wrong.

As I begin to know what I'm doing, exploratory expressions turn into definitions of functions and data structures. I refactor them out into proper source files and add references to them into a system loader (like everybody else, I use ASDF for that).
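
Such a system loader is typically a small ASDF file; a minimal sketch (the system, dependency, and file names here are invented for illustration) might look like:

```lisp
;; my-app.asd -- a minimal ASDF system definition.
(asdf:defsystem "my-app"
  :description "Sketch of a typical small system."
  :depends-on ("alexandria")   ; third-party deps, fetched via Quicklisp
  :serial t                    ; load components in order
  :components ((:file "package")
               (:file "model")
               (:file "main")))

;; Then, at the REPL:
;; (ql:quickload "my-app")     ; or (asdf:load-system "my-app")
```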

Occasionally I might screw up the dynamic state by doing something dumb. Then I kill the session and restart it. The Lisps I use are generally back up and ready for me to continue in less than a second.

When I've made enough progress to want to build a release that I can distribute to testers, I write a builder file. The exact details of that depend on which Lisp I'm using; Lispworks, SBCL, and CCL each have their own idiosyncratic ways of going about it, but a simple build is generally something like a half dozen lines, and a complex one probably isn't much more than a page or so.
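
On SBCL, for instance, a simple builder file might be little more than this (a sketch only; the system name and entry point are invented, and details vary by project):

```lisp
;; build.lisp -- run with: sbcl --non-interactive --load build.lisp
(require :asdf)
(asdf:load-system "my-app")

;; Dump a standalone executable; MY-APP:MAIN is assumed to be
;; the program's entry point.
(sb-ext:save-lisp-and-die "my-app"
                          :toplevel #'my-app:main
                          :executable t)
```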

The builder file evolves along with the rest of the project.

The same general process applies to unit and integration tests. In fact, I'm pretty much always testing--testing my thoughts and testing what I just wrote is probably the majority of my activity most days. All I have to do to create formal unit and integration tests is make a testing scaffold and then harvest expressions from my scratch files and test comments to flesh it out.

When I release, I tag a commit in the source repo. I might also make a reference image to keep around locally, in case I want to test something against it or hand it to a collaborator.

I work in files a lot, which is typical of Lispers. Smalltalkers are a little more oriented to living in the image. For the most part, files are a serialization format that you use to exchange code and other resources with other sites. Locally, everything lives in your image. Modern Smalltalk systems have their own built-in interfaces to revision control, either to git or, in Squeak or Pharo, to a Smalltalk-specific revision-control system called Monticello.


(Hope this isn't too disruptive to the conversation, but I just wanted to thank you for the time you're taking in sharing these details with us. All very interesting.)


There were tools for Smalltalk systems that did version control on the level of individual objects, in a networked way. Some of that drives the present synchronization efforts with Git in Pharo; generally, the name to search the archives for is "Envy".


hi Mikel, do you know if there is some Ralph and Bauhaus info/docs/sources out there? Or they are completely lost in time?


They're not completely gone, but they're hard to find.

This book describes an early version of Dylan that was very close to Ralph:

  https://www.amazon.com/Dylan-Object-Oriented-Dynamic-Language/dp/B000KGC0HA

It wasn't exactly the same. Ralph had some extensions that weren't part of the language described in the book. For example, it had a class for representing blocks of memory allocated outside the Dylan heap (that is, "foreign" memory).

It used to be possible to find a copy of the book online, but I didn't turn up anything with a quick search.

I think Bauhaus sources still exist somewhere, but I don't know how you'd get them. I asked Apple about releasing them some years ago and they ignored the question.

A few years later I asked whether they were still in Apple's software library, and the person I asked--who, I would have thought, should know about it--didn't seem to have any idea what I was talking about. Maybe they ditched the software library.

Once upon a time there were videos of some of the demos that various folks did of pre-release Newton features. Maybe some of those still exist somewhere; I don't know.


I just noticed that a sibling comment has a link to Rainer Joswig's archive of the Dylan book (Dylan, an Object Oriented Dynamic Language, the book I referred to above).

Here's the link again for reference in context:

http://lispm.de/docs/prefix-dylan/book.annotated/

I should have known Rainer would have it.

Even better: he's also archived the design notes:

http://lispm.de/docs/prefix-dylan/design-notes/


Ralph was the initial name of the project, from the film The Invisible Man. More about Dylan history: (1)

Ralph used prefix notation, Dylan infix notation. Here you can find an implementation of Ralph: (2)

From (2): Ralph is a Lisp-1 dialect that compiles to JavaScript. It is heavily inspired by an early version of Dylan (also known as "Prefix Dylan"), as described in Dylan – An object-oriented dynamic language.

Also, (3) is about a Lisp-to-Dylan translator. There is also Norvig's Lisp-to-Dylan translator, cited in (3).

(1) https://en.wikipedia.org/wiki/History_of_the_Dylan_programmi...

(2) https://github.com/turbolent/ralph

(3) https://tim.pritlove.org/2003/11/10/converting-lisp-to-dylan...


Just a nit: the film "The Invisible Man" is unrelated, based on a novel by H. G. Wells about a scientist who drives himself mad with a technique that renders him invisible.

Ralph was named for Ralph Ellison, author of a different, later book Invisible Man, about being a black man in America.


You are right, wikipedia: According to Apple Confidential by Owen W. Linzmayer, the original code name for the Dylan project was Ralph, for Ralph Ellison, author of the novel Invisible Man, to reflect its status as a secret research project.


Dylan originally used Lisp syntax, as documented in the Dylan Manual; there was angst when it changed. I don't have that with me, but it's online: http://lispm.de/docs/prefix-dylan/book.annotated/


I am also working on an app using LispWorks [1]. If you want, please contact me and we can jump on a call and share experiences.

Off topic, but many years ago I had lunch with Larry Tesler (John Koza was also there) and Larry knew of my CL book from the 1980s and pitched me to rewrite it in Dylan. That might have changed my career trajectory.

[1] http://knowledgegraphnavigator.com/


I don't generally like synchronous calls, but for you I might make an exception. Also feel free to drop me a line at mikel@evins.net.

I liked and admired Larry. I considered him a friend. Once I sat with Steve Jobs in my office at NeXT and argued about Larry's virtues. I took the pro-Larry position.

Your lunch was probably around the time I was working on that Lisp OS at Apple. That was Larry's idea, too.

If you don't hear from me, it's not because I don't want to talk with you; it's because I'm forgetful (one reason I like email better than synchronous calls: inboxes remember things that I forget).

Knowledge Graph Navigator sort of intersects with some of my running interests--particularly knowledge representation and frame stores.


I never knew about this, and it turns out there have been quite a few rich comments about it over the years—all, unless I've missed one, written by you. Here's a link if anyone wants to look back:

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


Thanks; I looked back myself. Boy, I like to talk about Ralph, huh?

To be expected, I suppose. It really was my favorite working language ever.


Please keep talking! Your comments are superb.


Yeah this is cool. I did not know live coding was a thing. I'd like to see a video of that to compare with autobuild workflows that watch files for modification times.


There's this recent video showing a little of what the Pharo Smalltalk environment is like:

https://www.youtube.com/watch?v=HOuZyOKa91o

The video was made by a Ruby programmer rather than an experienced Smalltalker, so it's more a newbie showing you cool things he's discovered than an expert showing off the power of the system.

On the other hand, there's this one:

https://www.youtube.com/watch?v=o4-YnLpLgtk

That one is an expert, Kalman Reti, showing some of what it's like to work with a Symbolics Lisp Machine. (It's not a real Lisp machine; they're all at least thirty years old now; this is an emulator running the old Symbolics OS on Linux).

Go to YouTube and search for Rainer Joswig and you'll find a few video demos of Lisp Machines and MCL and so forth.

Finally, here are a few images from about eighteen years ago, also from Rainer. They show various views of the Macintosh Common Lisp and SK8 environments. It's not an experienced user showing you around, but at least you can get some idea what the environments looked like:

http://www.lemonodor.com/archives/000028.html


Do you pretty much exclusively use LispWorks, or do you use other tooling? I am curious to hear.


No, I use all sorts of things, depending on what I'm trying to accomplish.

I'd probably use SBCL or CCL for everything, if Lispworks' CAPI or something equivalent worked on them. I do use them when I don't need to build a GUI that I can easily port among platforms.

When I wrote a WYSIWYG text editor years ago, I did it with CCL because it was for my own use, and I didn't need it to run on platforms other than macOS. If I wrote a new version of it, I'd try to find a way to make it work across Windows and Linux as well. (I can't use Lispworks for that because it violates their license restrictions on distributing something that can be used as a Lisp development environment.)

I use other languages when I'm paid to, or when I want to do something that is a lot easier to do in another language. For example, I recently wrote a prototype of a web-based collaborative editor and publishing platform for shared documents. I chose Couchbase for storage, because it solves the sync problem reasonably well, and I used Clojure and Clojurescript with React and Reframe, because it worked well with the Couchbase Java SDK.

I've been paid to build prototypes, apps, tools, and system software with Lisp, Icon, ML, C, C++, Objective-C, Java, Python, Swift, Prolog, Haskell, Scheme, and other things. If I had a modern version of Ralph, I'd most likely use that. Failing that, Common Lisp is what I like best.


I think Reddit's decision to move from Common Lisp to Python is interesting and their reasoning (ecosystem) is still valid 15 years later.

The historical timeline is especially interesting because Reddit's cofounders Steve Huffman & Aaron Swartz were alumni of Paul Graham's first YC batch and PG is the author of the well-known Lisp essay "Beating the Averages":

- 2001-04 Paul Graham : Lisp essay "Beating the Averages" [1]

- 2005-07-26 Paul Graham : https://groups.google.com/forum/#!topic/comp.lang.lisp/vJmLV...

- 2005-12-05 Steve Huffman : https://redditblog.com/2005/12/05/on-lisp/

- 2005-12-06 Aaron Swartz : http://www.aaronsw.com/weblog/rewritingreddit

The takeaway from the Reddit case study is this: yes, Lisp macros and language malleability give it superpowers that other languages don't have (the "Blub Paradox"), but other languages have superpowers of their own (ecosystem/libraries) that can cancel out the power of Lisp macros.

You have to analyze your personal project to predict if the ecosystem matters more than Lisp's elegant syntax powers.

[1] http://www.paulgraham.com/avg.html


Or the simpler reason is that they just knew LISP better than other languages and chose to use it.

Once they needed to grow by adding developers, they decided to rewrite into a language that other people already knew.

The only "super power" is a complete understanding of what your end users want so that you can grow.


When you were writing a tiny prototype 15 years ago, any language you knew was fine. But over time -- both over the life of the product and in the world overall -- software grows far more complex, and the ecosystem matters much more.


Knowledgeable developers are part of every language's ecosystem.


Yes but ease of attracting developers is harder in some languages versus others.

No need to include the "knowledgeable" modifier.


Interestingly, I have encountered claims that using an "elitist" language like Common Lisp, OCaml, or Haskell, combined with widening the recruitment pool (by going globally distributed), actually increased some companies' ability to hire experienced developers - because it filtered out the popular languages, which suffered from a higher number of people who had a higher opinion of their skills than was warranted, or who just wanted to "fake it till you make it", especially after boot camps became popular.


When Alexis Ohanian gave a talk at Google in 2013, I got to talk with him about him and Steve dropping Lisp for Python. He said the Lisp version just kept falling down.

Using SBCL in production on a customer project, we had good results hiring an SBCL maintainer/developer to work past a few issues. From personal experience, I can assure you that LispWorks and Franz both provide incredibly good support for their customers.

That said, if I had to develop Reddit right now, I would not use Common Lisp either.


I've looked a bit at the original Reddit code (it is/was published on GitHub); it does a lot of things in unusual ways for a Lisp project, and I'm not too surprised that they abandoned it. I'm pretty sure that if they had rewritten it in Lisp, taking advantage of what they learned, the second version would have worked as well as the Python rewrite.

Also, Quicklisp didn't exist (or wasn't as widely used) then. I've done all sorts of Lisp projects now and have never had serious issues finding good libraries for standard web development tasks.


Didn't they also have to deal with wanting to use a threaded architecture while simultaneously deciding to run on a less supported platform (FreeBSD)?


What would you use instead if you had to choose a Lisp based language today?

Clojure?

Or a non-lisp language instead?


> ...the "Blub Paradox" -- but other languages also have other superpowers (ecosystem/libraries)

Indeed. The "blub paradox" is profoundly arrogant - it presumes not only that people are too stupid to see the advantages of Lisp, but that Lisp advocates are smart enough to know that other languages have nothing unique to offer that Lisp doesn't.


PG calls it a Paradox because no one (not even the Lisper) can know whether their language is Blub.

You are right, though, that the Lisp community read the essay and internalized the idea that all other languages are Blub. In my opinion this led them to collectively dismiss, for far too long, the dictionary literals of every dynamic language that has eaten Lisp's lunch in the marketplace over the last two decades. Objections were raised ("you need homoiconicity for macros!"), and everyone assumed they were valid and stuck with Common Lisp's woeful hash table interface and the loop macro keywords, insisting all the while, "oh, that's a DSL, you could implement that with macros" [0].

And then Clojure implemented Map literals and - what do you know - Clojure still has Lisp Macros!

[0] Efforts in this direction thankfully forthcoming: https://github.com/inaimathi/clj


This is all surface, little substance.

Map literals are 20 lines of Common Lisp [1], via the programmable in CL reader (that Clojure doesn't have).

Richard Gabriel's "Worse is Better" is a lot more relevant about "Lisp in the marketplace" than anything you've mentioned here.

[1] https://gist.github.com/rmoritz/1044553/b2d2d8e4b933cb0f6c3f...
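The gist aside, a minimal sketch of the technique (invented names; printing and tooling concerns deliberately ignored) might look like:

```lisp
;; A minimal {...} map-literal reader macro. A sketch only:
;; real versions also handle printing, nesting, custom tests, etc.
(defun read-map-literal (stream char)
  (declare (ignore char))
  (let ((pairs (read-delimited-list #\} stream t))
        (table (gensym "TABLE")))
    `(let ((,table (make-hash-table :test #'equal)))
       ,@(loop for (key value) on pairs by #'cddr
               collect `(setf (gethash ,key ,table) ,value))
       ,table)))

;; Install { as a reader macro, and make } behave like ).
(set-macro-character #\{ #'read-map-literal)
(set-macro-character #\} (get-macro-character #\)))

;; Usage: {"a" 1 "b" 2} reads as code that builds an EQUAL
;; hash table with two entries.
```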


Unfortunately:

- that code doesn't provide printing for hash tables. Oops!

- breaks tooling:

  - doesn't provide editing or syntax highlighting support.

  - confuses any tool that reads all of the files of a project using the Lisp reader.

- it gratuitously introduces a vector syntax that is unnecessary since #(1 2 3) is perfectly fine and standard across more than one Lisp dialect.

- hard-codes the #'equal equality

- doesn't play with the newer ways of registering Lisp read syntaxes, like https://github.com/melisgl/named-readtables


> and stuck with Common Lisp's woeful hash table interface and the loop macro keywords

You are not stuck with anything in Common Lisp.

Instead of loop you have lots of options, from the built-in constructs (do, dolist, map...) to very powerful libraries like SERIES and ITERATE.
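
For instance, the same traversal can be written with the standard constructs or with the ITERATE library (the latter loadable via Quicklisp); a quick sketch:

```lisp
;; Three ways to sum a list: loop, dolist, and iterate
;; (iterate is a Quicklisp library; the rest is standard CL).
(loop for x in '(1 2 3) sum x)            ; => 6

(let ((total 0))
  (dolist (x '(1 2 3) total)
    (incf total x)))                      ; => 6

;; (ql:quickload "iterate")
(iterate:iter (iterate:for x in '(1 2 3))
              (iterate:sum x))            ; => 6
```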

You are not stuck with the default hash table interface either. Many libraries for data structure are available, you just pick the one that suits you best.

> And then Clojure implemented Map literals and - what do you know - Clojure still has Lisp Macros!

Most of those Clojure features are already available in CL through the use of libraries. Even the ability to easily call Java libraries, by using the Armed Bear Common Lisp implementation.
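
As a rough sketch of what that looks like on ABCL (class and method names here are standard Java; the exact return values depend on the Java side):

```lisp
;; Calling Java from Armed Bear Common Lisp (ABCL).
;; JNEW and JCALL live in ABCL's JAVA package.
(let ((list (java:jnew "java.util.ArrayList")))
  (java:jcall "add" list "hello")   ; ArrayList.add(Object)
  (java:jcall "size" list))         ; ArrayList.size()
```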


I think that Beating the Averages demonstrated that Lisp was PG's Blub, and maybe he wasn't using a language with strong static typing or logic programming or something like that for the same reasons that Java programmers wouldn't choose Lisp.

I don't think that the fundamental flaw with that essay is the arrogance so much as the assumption that "language power" lies on a continuum in the first place.


Please reconsider your assumptions. PG wrote two Lisp books in the nineties, one of them containing a Prolog interpreter. In fact, many Lisp programs utilize some form of logic programming. With regards to "strong static typing", types are but one way to think about things. Since Lisp affords powerful ways to shape the language, there is a culture of implementing and thinking about various paradigms, typing included. For example, the first Haskell implementation was written in Lisp, and there are other explorations of that particular paradigm. All programmers are blub programmers to some extent, but Lisp has the tools and the culture to fight it.


I apologize that my tone came across as dismissive. Chapter 24 of my paper copy of On Lisp does contain a "sketch of [Prolog's] implementation". I can certainly see the value in Lisp and the culture of bottom-up programming (there's a reason I own a physical copy of the book).

I shouldn't have brought up logic and statically-typed programming as specific examples; my point was more that it's misguided to consider the "power continuum" as one-dimensional, where languages lower on the continuum are "dumb" and languages higher on the continuum are "weird". As you said, a language with good types is not necessarily more or less powerful than a highly dynamic language.


There was nothing special about the language chosen for viaweb. It was more the ability to understand what their endusers wanted better than their competitors.


If their competitors were writing their applications in C++ or Perl, it probably would have made a noticeable difference to their iteration speed. Writing a web application in C++ (especially in 1995) would not have been too efficient.

I'm sure that they had a good understanding of their customers, but the language had to have made a difference as well. These days, we're writing web apps in languages like JavaScript and Ruby, not C++, so the difference would be a lot less significant.


Knowledge of the customer and programming ability is the most important driver of success.

The idea that LISP was a secret sauce for viaweb does not hold up under scrutiny.

If you have access to lexisnexis to lookup news articles from the era, it’s easy to see that.


Google is known to write a lot of their web apps in Java, which is very similar to C++.


>I think Reddit's decision to move from Common Lisp

That original Reddit system was very simple, yet the Lisp source code was of poor quality (it is currently available on the web). When it was released, the lispers on comp.lang.lisp said they could write a clone in a few days. One of them did it in about 2 or 3 hours (true story).

So it wasn't good code. Moreover, for some strange reason, they chose to develop on a different implementation than the one running on the server, and things like that.

Then Reddit got a famous hire, recommended by PG, who was a Python expert. So he rewrote the thing in Python and, obviously, being a Python expert, did a good job.


My understanding was that Steve mostly wrote it in Lisp to impress PG, and then Aaron convinced Steve to rewrite it (although Aaron was kind of a Python fanboy and it's telling that the rewrite only took a weekend).


Do you know of any of this 'superpower' code released somewhere? Like, is it possible to study the original Viaweb code?



Thanks. Looks neat. Will see a bit more of it. Of course the next question is why they abandoned it, and reading this https://news.ycombinator.com/item?id=3815491 my translation is that they had a bug which came from the libs and which they couldn't fix.


Maybe this is not a showcase of 'superpower'. From what I read it took a whole summer to write in Lisp and the rewrite in Python took a weekend. (But I will try to double-check this.)


>From what I read it took a whole summer to write in lisp and the rewrite in python took a weekend.

And one guy in comp.lang.lisp wrote a working Reddit 1.0 clone in Lisp in 4 hours:

https://groups.google.com/d/msg/comp.lang.lisp/9WD9-yEaqMs/H...


It didn't seem to be online for a significant amount of time? How would one know it worked (and was maintainable)?


I would like to consider Python a Common Lisp alternative. But it's one or two orders of magnitude slower and is not really standardized. I think there is a lot more that's attractive than just macros.


...and Python doesn't have it. Image-based programming is the other thing besides macros that gives Lisp an edge. Python might as well be batch-compiled.


> and is not really standardized

What do you mean by this ?


My understanding is that the Python standard is essentially whatever CPython chooses to do. They do write the changes to a specification separately, but essentially it's one moving implementation, with alternatives that have compatibility limitations.


Indeed. I'm curious what Python 4 will do to Python 3.


The "standard" is "whatever CPython managed to miss a segfault for today".

And PyEval_EvalFrameEx is the stuff of nightmares that can't really be fixed, because it hardcoded spaghetti that, if rewritten, could subtly break Python in myriad ways that would make the 2->3 migration look like a cakewalk.

Oh, and there's no real test suite to verify that you didn't break something subtle in the implementation.

While parsing Ruby code is well known to be problematic, there is a very good reason why Ruby can have interchangeable implementations while Python pretty much can't.


these days many newer languages have macros: lisp is not alone


We haven't figured out macros yet.

There's a perceptual litmus test here. Some people see life on earth and see God. Others see the faintest exploration of the design space, an impoverished code base that made it impossibly far. A miracle, perhaps, but not what we'd see in a billion runs of the simulation.

Same with Lisp macros. Their few variants look bolted on. We haven't begun to explore the design space, yet the best macro systems in other languages are copies of what lisp has figured out so far.

There's an 80:20 rule for language emergence. Ecosystem questions aside, a Turing-complete language needs to start by getting one thing right. APL leveraged multidimensional array handling into a general purpose language. Scripting languages gain much of their leverage from nested hash tables.

Concise versions of monadic parsing are exhilarating in their power and expressiveness. If a language was designed so parsing and manipulating itself was its sweet spot, generalizing this parsing to typed trees would be easy, and we'd invent many new programming paradigms. A general purpose language would follow.

Lisp, an implementation of the lambda calculus, is not this language. Lisp is a machine language precursor to this idea, with various macro systems bolted on.

Haskell, an implementation of typed category theory, is not this language either. At the expression level, Haskell is stuck at a few hard-wired pattern matching primitives. Haskell does not make the assumption that everyone has learned to compose arbitrary parsing operators.


> We haven't figured out macros yet.

the racket lang team has investigated this heavily.

https://www.youtube.com/watch?v=ABWLveMNdzg

https://www.cs.utah.edu/plt/publications/popl16-f.pdf

https://www.cs.utah.edu/plt/scope-sets/

> Haskell, an implementation of typed category theory

http://math.andrej.com/2016/08/06/hask-is-not-a-category/


> yet the best macro systems in other languages are copies of what lisp has figured out so far.

These are going to be controversial assertions in a Lisp thread, but I believe

1. C++ Templates + Concepts is a good Macro facility

2. Most of what makes it good is not borrowed from Common Lisp

On the other hand, I agree strongly with this assertion:

> Scripting languages gain much of their leverage from nested hash tables.


I've been pondering this comment, and I'm not sure I agree that C++ Templates + Concepts are really equivalent to Lisp macros. They're about enabling generic programming with type constraints (when you add concepts, in particular). This seems more akin to Common Lisp's defgeneric and defmethod, where you can specialize your implementation based on the type. Like, assuming some class hierarchy of foo (base) -> bar -> baz:

  (defgeneric do-something (object))
  (defmethod do-something ((object foo))
    (format t "something every foo can do~%"))
  ;; bar inherits foo's method unchanged
  (defmethod do-something ((object baz))
    (format t "something only bazzes should do~%"))
Templates and concepts are certainly a powerful tool, and I personally like them (as I've come back to C++ after a long break). But macros let you do something else entirely. You can create entirely new syntactic structures, or modify existing functions (see Norvig's PAIP, chapter 9, where macros are used to memoize recursive functions), or any number of other things.
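For instance, here is a minimal memoizing-definition macro (my own sketch, not Norvig's PAIP version; hygiene of val/hit is ignored for brevity):

    (defmacro defmemo (name (arg) &body body)
      (let ((cache (gensym "CACHE")))
        `(let ((,cache (make-hash-table :test #'equal)))
           (defun ,name (,arg)
             (multiple-value-bind (val hit) (gethash ,arg ,cache)
               (if hit
                   val
                   (setf (gethash ,arg ,cache) (progn ,@body))))))))

    (defmemo slow-square (x)
      (sleep 1)
      (* x x))
    ;; the first (slow-square 4) takes a second; repeats return instantly

Nothing in the template machinery expresses this kind of definition-rewriting as a plain syntactic form.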


> I'm not sure I agree that C++ Templates + Concepts are really equivalent to Lisp macros.

I think I said as much in my comment.

> This seems more akin to Common Lisp's defgeneric and defmethod

Except that Templates are statically dispatched and type-checked.

But also one can perform meaningful compile-time computation with them - I have used this in anger to implement stencils and iteration logic for partial differential equation solvers in 1-3 dimensions so I can say from experience that it is not a mere curiosity. And I don't think that CLOS really has any analog for the techniques used to implement std::tuple.

> But macros let you do something else entirely. You can create entirely new syntactic structures, or modify existing functions (see Norvig's PAIP, chapter 9, where macros are used to memoize recursive functions), or any number of other things.

I think the thrust of my comment is that Lisp programmers identify this type-blind AST-mapping functionality as the core of "a Macro System," while users of other languages (with more syntax to begin with) see this as somewhat secondary to the goal of type-checked compile-time computation.


If C++ had a good macro system, the Qt library wouldn't need the "moc" preprocessor to implement its "slots" abstraction. Templates aren't powerful enough to fill that role, even when you add Concepts to them.


My understanding is that moc is no longer necessary, but sticks around for legacy compatibility, but I may be wrong.


I don't think another language's macros can match lisp macros unless the other language is also homoiconic.


Latter-day Dylan is infix with a Scheme-y macro system: https://opendylan.org/books/dpg/macros.html

Cue arguments, I guess, about whether Scheme is a Lisp, whatever their standards say.


Clojure is weakly heteroiconic and seems to have rough parity


Clojure is “homoiconic”: Clojure source code is represented in terms of the data structures Clojure applications manipulate. There’s nothing preventing Common Lisp from having a similar surface syntax either: a couple read macros on [ and {, and a library of normal macros would be sufficient and could probably be written over a weekend.
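A sketch of what one of those read macros might look like (a toy version of my own, evaluating elements the way Clojure does):

    ;; After this, [1 2 (+ 1 2)] reads as (vector 1 2 (+ 1 2)) => #(1 2 3)
    (set-macro-character #\[
      (lambda (stream char)
        (declare (ignore char))
        `(vector ,@(read-delimited-list #\] stream t))))
    (set-macro-character #\] (get-macro-character #\)))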


In fact, since Common Lisp "source code" is NOT defined in terms of text but data structures (that happen to have standard serialization/deserialization) and that's the level that macros work on, you could theoretically even make semi-algolish syntax and still keep CL's macro system.

It would just look weird as fuck.



That's one example, not as crazy as what's possible. Though I have to say I dislike the indent based blocks :/


I am not sure: you can have "quoting" without this property.


C has macros. They are nothing like lisp macros. Please cite a non-lisp language with macros a powerful as lisp's.


The major advantage of Lisp macros over macros in other languages aren't their power but the fact that using Lisp's macro system is virtually the same as using the rest of Lisp.

Lisp macros look like Lisp and in manipulating code via macros you can leverage the rest of Lisp and Lisp's ecosystem.

So writing Lisp macros is entirely natural for someone who already knows Lisp. You don't have to learn or use what is effectively a separate language, as you do with every other language that's not homoiconic.
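As a tiny illustration (a sketch of my own, not from the parent comment), the body of this macro is ordinary Lisp -- backquote, let, format -- with nothing separate to learn:

    ;; A macro that times any body of code:
    (defmacro with-timing (&body body)
      (let ((start (gensym "START")))
        `(let ((,start (get-internal-real-time)))
           (multiple-value-prog1 (progn ,@body)
             (format t "~&elapsed: ~,3f s~%"
                     (/ (- (get-internal-real-time) ,start)
                        internal-time-units-per-second))))))

    (with-timing (sleep 0.1))  ; prints something like "elapsed: 0.100 s"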

As Alan Perlis once said, "Beware the Turing tarpit, where everything is possible and nothing is easy."


C macros and lisp macros (In particular, I'm thinking of hygienic macros à la Scheme) are two wildly different extremes. In between them lies a huge continuum of interesting things. There are Scala's macros. There are Haskell's language extensions. There are code generators like the ones used to implement much of gRPC, and there are compiler plugins like the ones used to implement aspect-oriented programming in C# and Java.

Most of them arguably qualify as being somewhere between an 80% solution and a 90% solution for the practical problems that lisp might solve with macros.

They're frequently not anywhere near as convenient to use or understand. But that's arguably a good thing in the long run, because it encourages communities to pool resources and work on a coordinated solution that can be shared by the community. My take on the curse of lisp, for what it's worth, is economies of scale. Lisp makes it too easy (and fun) to just keep your head down and work on your own thing, which hinders pooling of resources.



Scala, Nim, Clojure, Haxe


Clojure is a Lisp so it doesn't count.


Julia


Julia is lisp in drag so it doesn't really count, but I'll give you the point.


In what sense is Julia a lisp in drag? I'm just starting in on it it doesn't seem any more lispy than any other dynamic language. In particular I've yet to see a car/cdr.


Haskell, OCaml


Rust?


I'm using Lisp in production, our browser and all of our infrastructure is in Lisp https://github.com/atlas-engineer/next.

To address some of your points:

poor ecosystem of libraries: many great libraries, some incomplete. you must not be afraid to look at the source code to understand and edit

poor community coordination: true, there are few large scale community lisp projects. be the change you want to see

dependency limitations with quicklisp: like what? quicklisp works great. if you are talking about loading several versions of the same library at the same time in a single image, this is being researched and developed

poor support for json: no, there is good support

have to restart the server every 20 days: not familiar, I have had lisp servers running for years now

hack to tune GC: no

if I had to redo my project would I choose Lisp? I would choose lisp, yes. there is no more powerful language than this, and believe me, I have tried many
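For reference, the whole Quicklisp workflow amounts to a few forms (dexador here is just an example library):

    (ql:quickload "dexador")          ; fetch + compile + load, with dependencies
    (dex:get "https://example.com")   ; use it immediately in the running image
    (ql:update-all-dists)             ; upgrade to a newer dist only when you choose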


The post [1] (though old) hints at there being no good libs for JSON (it mentions a workaround for the one he picks).

> For me, the most important quality I need in a JSON library is an unambiguous, one-to-one mapping of types. For example: some libraries will deserialize JSON arrays as Lisp lists, and JSON true/false as t/nil. But this means [] and false both deserialize to nil, so you can't reliably round trip anything!

Although this post is from 2018, I'm wondering if it has improved since then.

[1] https://stevelosh.com/blog/2018/08/a-road-to-common-lisp/#s4...


This just works:

    USER> (yason:parse "{\"null\" : null, \"false\" : false, \"empty\": []}"
                       :object-as :plist
                       :json-arrays-as-vectors t
                       :json-booleans-as-symbols t
                       :json-nulls-as-keyword t)
    ("null" :NULL "false" YASON:FALSE "empty" #())
Null is mapped to the :null keyword, false to a symbol named FALSE in the YASON package, and the empty array to an empty Lisp vector (in Lisp #(1 2 3) denotes a vector, whereas (1 2 3) is a linked list). All three can be distinguished from the others.

All those options default to special (dynamically scoped) variables, so you don't have to pass them every time.

Encoding prints to the standard output (you can redirect it to an http stream, if you want), so you don't see double-quotes being escaped, but the string is equivalent (modulo spaces):

    USER> (yason:encode-plist *)
    {"null":null,"false":false,"empty":[]}
The defaults in Yason just return lists for vectors, and map both null and false to nil, but this is not a problem: if you expect your field "items" to be a vector, then nil will denote the empty list; if you expect another field "enable-debug" to be a boolean, nil will be good enough too. But in cases where you need to handle them differently, the library can help you distinguish among all kinds of values.

See https://phmarek.github.io/yason/


>The post [1] (though old) - hints at no good libs for json

That's his opinion. We have a lot of JSON libs, maybe too many. I use "jonathan" and i'm happy with it.

EDIT: He wrote: "JSON support in Common Lisp is a god damn mess. There are an absurd number of JSON libraries and I don't really like any of them."

He doesn't like any of them, but I do like more than one.


I've been using `jsown` for all my Common Lisp JSON needs. There are more JSON libraries and I think JSON support for Common Lisp is good, not perfect, but no worse than most other popular languages.

With regards to Steve's issues: ask yourself whether they apply to you.


That part of the post was wrong even in 2018. There were two libraries that had a one to one mapping out-of-the-box, and at least two others that could be configured to work that way.


Have you tried Haskell? What was your experience?


I've tried Haskell, my experience was mostly that of confusion :-)


Clojure has a good ecosystem and you can fall back to Java's ecosystem if needed. It has excellent support for json, and acceptable async support (I use manifold + aleph, but there are other options). It runs on the JVM which has world class GC.
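For example, with Cheshire (one common JSON choice on the JVM; others exist):

    ;; JSON round-tripping in Clojure via Cheshire:
    (require '[cheshire.core :as json])
    (json/generate-string {:a 1 :items []})  ; => "{\"a\":1,\"items\":[]}"
    (json/parse-string "{\"a\":1}" true)     ; => {:a 1}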


I want to understand the complementary picture. Assuming Clojure as a good default, what would be some specific kinds of tasks/problems for which one might wish to avoid Clojure? And what alternatives would you recommend, for those cases? (be it another lisp, or otherwise)


I wouldn't use Clojure if fast startup time is important for you. So it's okay for servers, but not (in the general case) for command line tools. With GraalVM this has changed a bit, but then compilation of non-trivial projects with GraalVM is also non-trivial.

If you need a low memory consumption then Clojure is also not a great choice.

We've built an ecommerce site on Clojure though and it's been great. :)


SCI or Small Clojure Interpreter is a subset of Clojure meant for quick startup.

Babashka is a library on top of SCI and is particularly suited to building command-line tools. It doesn't need the JVM, starts up in milliseconds, has access to a large part of clojure.core (including core.async), and can be downloaded as a 13 MB binary.

https://github.com/borkdude/babashka
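A babashka script is just plain Clojure, e.g. (hypothetical file name):

    ;; greet.clj -- run with: bb greet.clj world
    (require '[clojure.string :as str])
    (println "Hello," (str/capitalize (or (first *command-line-args*) "there")))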


> not okay for (in general case) for command line tools

Babashka° is worth a look.

° https://github.com/borkdude/babashka


When considering Clojure like this it's important to remember that Clojure has many runtimes

It is designed to be hosted, so, a lot of the above doesn't apply equally for ClojureScript (as one example)


Clojure is not the best choice if you have a lot of number crunching code, unless you like writing Java.


Depending on your needs, I have generally found that between Fastmath and Neanderthal, I have no real worries about performance sensitive numeric code. Obviously if you start caring about CPU caches and specific instructions then you might struggle (I am writing a chess engine in Clojure for no good reason, and obviously it's not going to hit the level of throughput that engines written in C++ reach). There are annoying niggles, like Clojure not supporting all Java's primitive types, and obviously the wider support ecosystem for data science work is somewhat weak, but I've been impressed.


I delivered a service in 2019 that was written 100% in Clojure: a React Native app + SPA web app + server, all sharing the same code.

The developer experience was great. I got sub-second hot reload on all platforms (app emulator / app on-device / web / server).

The same UI components were shared between React Native and ReactDOM. All app issues are reproducible on the web, so I fix them on the web and know the app is working.

Even SSR was possible, with some tweaks.


Got a public repo to share that would show how you pulled this off?

Barring that, as a sibling comment asks, what specific framework and/or libraries did you use to achieve the holy grail of sharing all the things? (I thought React Native required implementing a separate UI for iOS and Android.)

Sub second hot reload sounds amazing, btw.


Not sure this will help: ClojureScript RN app: (I think has desktop and all that as well) https://github.com/status-im/status-react


Would love to see a breakdown of your setup for this :)


Not OP but to achieve this in clojure you could use something like:

For builds and dependency management: Leiningen, boot or deps.edn

For the front-end (clojurescript): Shadow-cljs + reagent

For the backend (clojure): Nrepl and nrepl.cmdline, Ring, jetty + compojure for a REST API

Frontend and backend can share code with .cljc files that work on all clojure dialects. https://clojure.org/guides/reader_conditionals

Both nrepl and shadow-cljs support code reloading through the REPL connection.

You can compile to a jar file and deploy that, build a docker image, or compile to a native executable with graalvm, or deploy on Aws lambda with Cljs+Lambada. Lots of options.
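The reader-conditional sharing looks like this in a .cljc file (namespace and functions here are hypothetical):

    ;; shared.cljc -- compiled for both Clojure (JVM) and ClojureScript (JS)
    (ns myapp.shared)

    (defn now-ms []
      #?(:clj  (System/currentTimeMillis)
         :cljs (.getTime (js/Date.))))

    (defn format-price [cents]
      (str "$" (/ cents 100.0)))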


Thank you, sir. I am just learning Clojure right now, coming from the JS and Go world.


What framework and libraries did you use?


CircleCI is mostly Clojure, and some Go.

The only problems we are seeing is the significant startup time cost (hence the Go bits, where it matters).

Memory usage also isn't always great, but that's more a JVM problem, as it's not releasing reclaimed memory as quickly as you'd like it to. Fine on servers, annoying on a 16GB laptop.

The Common Lisp problems you outlined don't really apply to Clojure; it's a modern language and has been stable and versatile for us.


Having used Clojure in production many years ago, I was very satisfied. And solid interop with existing Java libraries really brings in a whole ecosystem if needed.

Persistent data structures are really game-changing.


Is the new frontend also written in ClojureScript? Seems you've closed sourced the frontend (old source is here https://github.com/circleci/frontend) but you used to use CLJS, but maybe new one is written in something else?


I'm on the backend, but all I've seen of the new frontend is TypeScript. There has been a decision to switch languages for reasons that someone else can probably explain better than I could, a lot of us lurk here.


Rigetti Computing is a company that builds quantum computers.

We have used Common Lisp in production at Rigetti Computing for 4 years both in cloud deployment as well as for a downloadable SDK. Common Lisp is used to build what is currently the best free optimizing compiler for quantum computing [1, 2] as well as one of the most flexible quantum computer simulators [3]. They’re both open source, so you can actually see for yourself how we version, deploy as binaries, and deploy in Docker containers.

> If you are using lisp in production for non-trivial cases, do these issues still exist?

Before getting into the specifics, any large, non-trivial product that customers use will need tweaking. Especially languages that have runtimes, including languages like C. This is usually a result of leaky abstractions that just have to be dealt with in The Real World (TM).

> * poor ecosystem of libraries - few gems, most other half-baked

You have libraries for most things you need. Some are lacking, like in scientific computing. But all your usual stuff to glue things together is there and works.

Some people confuse “half-baked” with “no commits for 5 years”. Lisp libraries are older than many in vogue languages, and Lisp doesn’t change, so “5 years no commits” shouldn’t be worrisome.

At Rigetti, the biggest weight we had to pull that we wouldn’t have needed to do in, say, Python, is bind to a bunch of FORTRAN code, specifically LAPACK. That was a pain because you need to do FFI with complex double floats.

* poor community coordination

The community all agrees on one thing: Lisp is standardized.

Lisp lacks a sort of “centralized community” like Go or Clojure. I like to think that Common Lisp is more “democratic”. We have a “constitution” (the Common Lisp standard) that we all agree on and we all make our own contributions atop that.

With that said, #lisp on Freenode is attentive. SBCL, for example, responds to bug reports almost immediately. Several library ecosystems are kept up to date, etc.

> * Dependency management limitations with quicklisp

With Quicklisp, I personally realized a lot of other dependency management solutions are overblown and don’t actually solve problems. Maybe QL gets away with it because the ecosystem is smaller than others.

Xach, the creator of QL, does a lot of community diligence for free to ensure things work.

For production code, I’ve had 0 issues.

> * poor support for json

We use YASON [4] and it works fine.

> * poor support for async

This is true now but wasn’t true years ago. Lisp implementers have not been interested in implementing a runtime supporting async programming. Little in the Common Lisp language itself prohibits it. Before the world of threads and SMP, Lisp implementers did implement async runtimes.

Why aren’t Lisp implementers interested in async? I’m not sure. Maybe they see it as a passing fad?

> * have to restart the server every 20 days because of some memory leak

Memory leaks can happen but they’re usually the programmer’s fault for not understanding symbol interning, weak pointers, or finalizers. Writing long-running applications does require know-how.

> * hack to tune GC

I don’t consider it hacking. In SBCL we tune the GC to our application. This seems reasonable; different programs have different allocation patterns. But you wouldn’t need to do this out of the box.
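For the curious, SBCL's knobs for this look roughly like the following (SBCL-specific, not portable CL):

    ;; Grow the nursery so minor collections run less often:
    (setf (sb-ext:bytes-consed-between-gcs) (* 512 1024 1024))

    ;; The heap ceiling is set at startup instead:
    ;;   sbcl --dynamic-space-size 8192   ; megabytes

    ;; Force a full collection, e.g. right before dumping an image:
    (sb-ext:gc :full t)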

> is there a way you can quantify effort is resolving them, and if yes, what is it?

Big applications that are customer products require care and feeding. The benefits of writing in Lisp in the first place largely outweigh some of the extra work needed to productionize it.

I will say that writing production Common Lisp code is greatly benefited by having at least one experienced Lisper on the team. If a team would, for some reason, write an application in Lisp but not really have a good understanding of the language, then productionizing and hardening will be difficult.

> and, finally, if you had to re-do your project, would you chose lisp or something else?

Common Lisp allows something that never existed before (a completely automatic, optimizing quantum compiler) to exist. If Lisp weren’t there, that wouldn’t exist.

My parting advice is this: The biggest difficulty in writing an application is to actually write it. It takes a lot of hard work to get requirements, listen to feedback, articulate a direction, etc. In Common Lisp, SBCL or LispWorks or whatever else, dumping an executable is so easy and you can have a deployable “production app” in 5 minutes. So the good news is that you won’t be held up trying to figure out how to dockerize your Python app.

I definitely agree that before embarking, doing an objective and relevant comparative analysis would be good (we did this at Rigetti), but ultimately you just need to sit down and write.

Almost every programming problem is solvable these days with enough glue and elbow grease. I wouldn’t be too worried.

[1] https://arxiv.org/abs/2003.13961

[2] https://github.com/rigetti/quilc

[3] https://github.com/rigetti/qvm

[4] https://github.com/phmarek/yason


Ping me if you need another experienced Lisper. ;-)


How complete are your LAPACK bindings?


They’re complete in the sense that all standard functions are exposed. However, there’s only a modest “high-level” interface.

The library: https://github.com/rigetti/magicl

Comparison with NumPy: https://github.com/rigetti/magicl/blob/master/doc/high-level...


Thank you for your detailed insight


Perhaps you didn't mention it for a reason, but I am sure I won't be the only one to bring up Clojure in this thread, and others will argue for and against more eloquently. Obviously it lacks much of what makes people love Common Lisp, but it's more Lisp than not Lisp, and specifically addresses all of your points via the JVM and its ecosystem.


I've not used Lisp in production. But I sometimes wish I had. In particular, whenever there's an issue with a production process (think a batch job rather than some kind of online processing), I wish I could attach a debugger, inspect some variables, and invoke a restart. I don't know of any language that makes that sort of issue-resolution more convenient.
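For anyone unfamiliar, the Lisp version of that workflow leans on restarts (a sketch; valid-p and ingest are hypothetical):

    (defun process-record (r)
      (restart-case
          (if (valid-p r)
              (ingest r)
              (error "Bad record: ~a" r))
        (skip-record ()
          :report "Skip this record and continue."
          nil)
        (use-value (v)
          :report "Retry with a corrected record."
          (process-record v))))
    ;; When the error signals in production, attach via Swank/SLIME,
    ;; inspect R in the debugger, then pick a restart -- no process restart.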

I don’t really care about libraries. I think generally it’s not very hard to write an ffi binding to a C library and most of the time you don’t need a library anyway. Most of the time you can just write your own library from scratch as you probably only need a few functions.

But maybe this is more a statement of the sort of programs I work on and if you worked on different systems you would want to use lots of external libraries that you don’t control.

I think the biggest issue with CL is how tightly attached it is to an outdated standard. It makes it harder to have things like good Unix support if your interface is through pathnames and streams.

Other things which may be annoying about CL are the lack of a good way to consistently go from source code to a binary, and that compiler optimisations can be unpredictable.


> consistently go from source code to a binary

Can you talk more about this?


Let’s say I have a big repo full of lisp. I want a program that uses some subset of this lisp. Similarly to if I had a big repo full of C programs and libraries and I wanted to build one of those programs (statically linking all the libraries it used). In CL, it is annoying and tricky to go from those files to an executable I can deploy. Typically you’d write some script to (use ASDF to) compile/load your various files and then dump the image specifying a binary. It’s a bit messy and not very portable.


I will never understand how people have such a rosy view of C development. Is it just that today people think there is only GNU make, and only GCC or clang?

Autotools used m4 because it was the most portable Unix tool, not because it's a good language to write your build scripts in...


I have had problems with C, but building my code was never one of them. You are right that I have only used GCC, Clang, and Visual Studio.

I thought they use M4 because they were doing sophisticated preprocessing?


I misremembered; M4sh (which uses m4 to generate portable bourne-ish shell scripts) is what was the lowest-common-denominator for autoconf.


Perhaps C was a poor example. I just mean “statically compiled language where files become objects and objects get linked into executables/libraries and this is done by a build system”.

Perhaps rust would have been a better example as there is less variety in build systems.


Makes sense. I have noticed that deployment and compilation part of lisp isn't talked about often.

The method you described is the best one I have found (load everything and dump an image.)


>I have noticed that deployment and compilation part of lisp isn't talked about often.

Because it's trivial.

Compilation is almost implicit, when loading a file.

Creating an executable is only one line of code.
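The one line, for SBCL (names here are placeholders; LispWorks and CCL have their own equivalents):

    ;; Dump the running image as a self-contained executable:
    (sb-ext:save-lisp-and-die "myapp"
                              :toplevel #'my-package:main
                              :executable t)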


First, thanks for linking my Loving Common Lisp book. Common Lisp is my preferred language, but only for projects where I need/want an environment that supports rapid experimentation with new ideas. If I know how to implement something and I want to deploy at scale, then I might not choose Common Lisp.

I am a happy LispWorks customer but I also use SBCL (efficient code, easy to package standalone apps) and Clozure Common Lisp (fast compiler for development).


Can't speak for SBCL (I use it exclusively for development and haven't deployed anything non trivial and user facing with it) but I have used Clozure CL (CCL) successfully to develop back end services without horror stories regarding the GC. If you end up needing huge heap sizes, in theory with ABCL you get access to state of the art GC algorithm implementations and (IMHO) a much nicer, less dogmatic language than Clojure.

I recently switched to LispWorks after struggling a bit with CCL's Cocoa bridge and I wish I had done that sooner. A much younger, idealistic me would scream at the thought of it but sometimes you just need to throw some money at a problem. CAPI is really a killer feature.


We use CL for production (embedded/distributed system). CCL and Lispworks on low end ARM32.

We do use JSON but mainly for the product's Websocket API towards higher level SCADA system. As we have control over both ends there isn't much trouble really.

The project has soft realtime component also implemented with Lisp, and at least Lispworks exposes a substantial degree of control over GC. We however didn't need to tune anything thus far.

The system has perhaps a dozen direct dependencies on libraries; luckily, few of them posed any challenges in use.


If you don't mind my asking, what company is this? I'm a Lisp hobbyist working for a SCADA integrator, and while I'm not in the job market right now I might be in a few years.


It's norphonic.com, based in Norway. The product is not on the webpage yet though.


Ah, pity. I'd love to visit Norway sometime (I've heard great things), but my family won't move across the ocean.

Anyway, thanks for the reply.


Apple has been using Clojure in production for years, and is hiring: https://jobs.apple.com/en-us/details/200000986/apple-media-p...

Excerpt from the description: "These are the people who power the App Store, Apple TV, Apple Music, Apple Podcasts, and Apple Books. And they do it on a massive scale, meeting Apple’s high expectations with high performance to deliver a huge variety of entertainment in over 35 languages to more than 150 countries."


I use LispWorks; the CAPI works well on almost all platforms. They know what they are doing. Books: Common Lisp Recipes, PAIP, ANSI Common Lisp. The few libraries you need are very good. And you need Quicklisp. I stopped using C, C++, Pascal, Fortran, JavaScript. Although they are all just fine. You still have to learn the craft of programming. Just go for it.


I'm going to second the recommendation of Common Lisp Recipes. The others too, of course (especially PAIP).


Not really production because the traffic is low, but I run http://ichi.moe on SBCL (Hunchentoot web server) and I never had to restart it because of memory leak or a crash. I usually hot-swap the updates and the same server process stays up for months. It also heavily uses JSON and XML and the libraries for them (jsown and cxml) are perfectly adequate.
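For the curious, jsown's API is roughly this (a sketch from memory):

```lisp
;; Parsing yields an (:obj ...) alist-style structure:
(jsown:parse "{\"name\": \"ichi\", \"count\": 3}")
;; => (:OBJ ("name" . "ichi") ("count" . 3))

;; Field access, and serialization back to a string:
(jsown:val (jsown:parse "{\"name\": \"ichi\"}") "name")  ; => "ichi"
(jsown:to-json (jsown:new-js ("name" "ichi") ("count" 3)))
```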


I'm starting a new project in Lisp in 2020. That's rolling my own CAD tools for learning hardware development.

Lisp makes sense because it's a comfortable environment for interacting with my data. I'll be running the Lisp code "offline" so I'm not worried about runtime aspects like GC pauses and most of my I/O will be generating data for external tools (e.g. KiCad) in their own esoteric formats.

Coming back to Lisp it's nice to see that all my favourite software is still available and working fine, and there seem to be more active open source Lisp developers than ever. (Far fewer than Javascript, for example, but that's not my yardstick.)


I am running a web app in production, with live reloads, no issues so far.

> * poor ecosystem of libraries - few gems, most other half-baked

I actually had the choice for my libraries: DB interface, email sender, HTML template, data structure helpers… these ones with good documentation.

I have observed and explored the ecosystem for the last 3 years and I still find hidden gems. There are way more libraries than we can see at a quick glance.

We do a selection on https://github.com/CodyReichert/awesome-cl

> * poor community coordination

It looks like the older ones don't coordinate much and the younger ones do (and push the established players, such as the CL foundation).

> * poor support for async

Maybe not for a full async runtime (cf. the Turtl feedback), but there is a good choice of libraries for all sorts of async stuff (bordeaux-threads, lparallel, lfarm, chanl, SBCL built-ins, etc.)
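A minimal bordeaux-threads sketch, for the flavor of thread-based concurrency in CL:

```lisp
;; Spawn a worker thread and wait for it to finish.
(defvar *result* nil)

(let ((worker (bt:make-thread
               (lambda () (setf *result* (expt 2 16)))
               :name "worker")))
  (bt:join-thread worker))

*result*  ; => 65536
```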


I use Clojure and ClojureScript in production. The developer experience is a real pleasure: there is good library support, libraries are typically easy to use, and you can always interop with the underlying JVM/js if the need arises.

Clojure also has the core.async library. This can be used along with Reagent to build React-based apps; core.async allows you to do what Redux does but in fewer lines of code.

https://github.com/deckeraa/OpenStainer is an example of building apps this way.


Clojure is a production Lisp in 2020


Our health benefits startup has been using Clojure in production for some time. Efforts are ongoing to migrate the last bits from other JVM languages to Clojure, but ClojureScript is already ubiquitous.


I used to work in Common Lisp (SBCL).

The ecosystem problem meant something different to me than most. Sure, when new tech came out (Kafka to pick a random example) there wouldn’t be an open source library ready for me. But I had stable infrastructure so I didn’t really care much about that.

The ecosystem problem for me meant that if something went wrong I was on my own. It’s a pretty unfortunate feeling to search for a bug and find nothing on Stack Overflow.


> It’s a pretty unfortunate feeling to search for a bug and find nothing on Stack Overflow.

Par for the course in embedded. Stack Overflow isn't going to tell you why the kernel driver for the ABC123 chip is locking up on the XYZ15 eval board from Such-and-Such vendor.

You debug it yourself, whipping out JTAG debuggers and oscilloscopes, if necessary.

Because I work in embedded, I don't understand the whole ecosystem fuss.

I mean, joy of joys, you have the darned source to almost everything nowadays. That makes everything so easy, that all else is a footnote.


I'm coming at this from the opposite side. Currently, it feels like most of my programming skills are really just Googling skills and remembering what a particular error message means. The market emphasizes this kind of programming by combining existing libraries and hoping things work, so it's really hard for me to break away from doing that and start to learn the way that stuff works internally. I think that this was the reason that they stopped teaching SICP in Scheme [0].

I can read source code (I do some security research for fun) and use a debugger, but I still feel like I'm programming by poking. What would you recommend to younger programmers to get out of this rut?

0: In this talk at the NYC Lisp meetup, Gerry Sussman was asked why MIT stopped teaching the legendary 6.001 course, which was based on Sussman and Abelson’s classic text The Structure and Interpretation of Computer Programs (SICP). Sussman’s answer was that: (1) he and Hal Abelson got tired of teaching it (having done it since the 1980s). So in 1997, they walked into the department head’s office and said: “We quit. Figure out what to do.” And more importantly, (2) that they felt that the SICP curriculum no longer prepared engineers for what engineering is like today. Sussman said that in the 80s and 90s, engineers built complex systems by combining simple and well-understood parts. The goal of SICP was to provide the abstraction language for reasoning about such systems.

Today, this is no longer the case. Sussman pointed out that engineers now routinely write code for complicated hardware that they don’t fully understand (and often can’t understand because of trade secrecy.) The same is true at the software level, since programming environments consist of gigantic libraries with enormous functionality. According to Sussman, his students spend most of their time reading manuals for these libraries to figure out how to stitch them together to get a job done. He said that programming today is “More like science. You grab this piece of library and you poke at it. You write programs that poke it and see what it does. And you say, ‘Can I tweak it to do the thing I want?'”. The “analysis-by-synthesis” view of SICP — where you build a larger system out of smaller, simple parts — became irrelevant. Nowadays, we do programming by poking.


Edit-[optionally compile]-Run-[optionally step in Debugger] is addictive because the feedback loop is so tight, and it always feels like work. The way to break the addiction is to start working in a domain where there's significant friction to that process.

Work on a heavy-ass C++ GUI application that takes forever to load, with a big-ass code base that takes forever to compile, and you'll be thinking deeply about code in no time, because you can't program that sucker by poking when poking takes 40 minutes.


I think it’s about alternatives and expectations. In the embedded world you don’t have an alternative, so you expect that pulling out the JTAG debugger will be necessary. On the application side I have tons of alternatives that are extremely well supported, setting the expectation that most of the time I can focus on my application and not something weird like debugging my Postgres driver.


I really hope that Lisp will make a comeback in the space now occupied by XML and YAML. It's always the same story: narrow-minded engineers choose YAML (formerly XML) as the trendy simple configuration language, and it works fine for a while. Three years later they inevitably hit the ceiling with YAML and either start badly implementing new language features on top of it (see GitLab CI's `when`, `only`, `except`, etc. syntax) that are never flexible enough, or write hugely complicated templating tools (see Kustomize and Helm for K8s) for things that would amount to a simple if-expression in Lisp.

This is the use case where Lisp can really shine. Configuration is almost always untyped and doesn't start out with a library ecosystem. Most of the time you just need a way to write a simple, nestable DSL that is human-readable. There's probably no language that makes this simpler than Lisp: any college freshman could write their own interpreter. But it is also extremely flexible if it turns out that you need more power than YAML offers.
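To illustrate (all names hypothetical): a config file that is just s-expressions can grow a conditional with a plain WHEN, instead of bolted-on `only:`/`except:` keys or a templating layer:

```lisp
;; Hypothetical CI config generated as plain data.
(defun build-config (branch)
  `(:pipeline
    (:stage :test :script "make test")
    ;; an ordinary Lisp conditional instead of a YAML extension
    ,@(when (string= branch "master")
        '((:stage :deploy :script "make deploy")))))

(build-config "master")
;; => (:PIPELINE (:STAGE :TEST :SCRIPT "make test")
;;               (:STAGE :DEPLOY :SCRIPT "make deploy"))
```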


It's amazing how people who call themselves professional software engineers can be unaware of the fact they are accidentally writing a (horrible) programming language. People with computer science degrees look confounded when you even let the letters "DSL" pass your lips.


I don't think XML has been 'trendy' for ages :)

XML is totally fine for storing structured dumb data. Don't program in XML.

I would not be so bullish about implementing your own interpreter as the obvious best choice every time.

Numbers especially are hard to get correct unless you know what you are doing.


I don't think this user is advocating using home grown interpreters. Rather it's easy to understand how they work completely.

The point is complex configurations tend to grow into a poor programming language. They would be far better as a lisp DSL.


Pretty much everything in CI is driven by YAML or JSON or YAML string as a value in a JSON property and they uniformly suck.

So I think GP is correct.


> poor ecosystem of libraries - few gems, most other half-baked

Definitely no longer an issue for Common Lisp. Quicklisp is da bomb.

> Dependency management limitations with quicklisp

Never had any issues. It Just Works for me.
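For those who haven't tried it, the whole workflow is:

```lisp
;; Fetch, compile and load a library plus its dependencies in one call:
(ql:quickload "alexandria")

;; ...and use it immediately:
(alexandria:flatten '(1 (2 (3)) 4))  ; => (1 2 3 4)
```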

> have to restart the server every 20 days because of some memory leak [3]

You can get memory leaks in any language. You're much less likely to have a leak in a GCd language than a non-GCd language. But seriously, if you can't easily restart your server whenever it's needed, you have bigger problems.

> hack to tune GC [5]

I've run CL in production for many different applications. I've never had to tune a GC.


ITA software's account of using CL may also be of interest,

http://www.paulgraham.com/carl.html


try hy - https://docs.hylang.org/en/stable/

Most of my production code is python, but I also wanted to create a dsl so I used hy. You get to enjoy both worlds


I tried to get hy but I just got confused.


Thank you for your responses. I was interested in learning about these concerns for Common Lisp only, and not the other dialects including scheme, clojure, racket ...


This may not squarely answer your question, but when I was investigating this for a production project, the following tools seemed to be both essential and well-supported.

SBCL, SLIME, SLY, ASDF, quicklisp, and ultralisp

Other tools and solutions are available, but familiarity with this seemed to be what was required for comfortable use.


My very early stage startup, whiplash.gg (tune in today at 2pm PT to see us in action) uses Clojure on the backend.

Ladder Insurance is a full stack Clojure shop and Watchful has a bunch of Clojure in prod too.


I made a few hobby projects in different Lisps, but in the end, for my field (REST backends), I would choose Erlang, and today I also started looking towards Kotlin. Lisp's macros, while powerful, still create a lot of trouble in terms of understanding the system, working with other people, and finding support online. Also, I personally didn't meet many developers who have a solid understanding of FP, not even talking about macros - they are simply too complicated and don't allow human scaling.


Why Kotlin and not Elixir + Phoenix for a web backend?


I have some experience with Elixir, you can think of it as nicer Erlang. Kotlin is interesting because it gives you access to Java (Android & Desktop) world and is closer to functional languages while still being widely usable.


kotlin makes spring boot much more usable


Have you considered lisp dialects? I've been working with Clojure/Clojurescript for a while and it addresses most of those concerns. It's been a real pleasure to work with.

- There are lots of high quality libraries, but you also have access to the entire Java and/or Javascript ecosystem with very easy interop

- Not too sure what you mean by "community coordination", but I've found the clojure community to be exceptionally friendly, welcoming, and responsive. The core team is active and I've even had some questions answered directly by them in slack!

- There is some tooling built and maintained by the core team that makes managing dependencies painless

- There's a library (transit) that's widely used and well suited for marshalling data in and out of a clojure app. There's a library that's part of the core which implements the CSP (communicating sequential processes) concurrency model, similar to Go. It works in both Clojure and Clojurescript and is a very sensible way to manage async operations in a system.

- GC is managed by the JVM (for clojure), which is state of the art


The problem with malleability of programming languages, is there is no upper limitation of how malleable it can be.

I experienced this in both Clojure and Ruby as well. Developers can spend countless hours cycling through a loop of malleability, reworking an already functional product in production. It costs precious time and money; it is expensive. And more importantly, it is non-stop.

Other programming languages like Go are very conservative with feature changes and impose hard limitations by choice; the team/company behind it thinks the same way, that a cycle of O(n) is destructive.

And nowadays you can't expect a developer to stay at a company for long, so reliable tooling that hits hard and precise, with less time spent tinkering around, is needed more than ever.

As for the future of programming languages, we will see languages being used not because they are malleable, but because they are sure, precise, and right for the job.


I tried Go and didn't like its opinionated nature. As a beginner, I disliked,

- I have to use every variable I declare, otherwise it won't compile (b/c golang authors decided this produces better code)

- Official docs confusingly suggest I structure imported files with the root as github.com/ (b/c golang assumes all code will be open sourced)

- I can only import one file per folder

- There is no built in map or reduce function. You're expected to use for loops everywhere because it is more explicit

Once I discovered the last point I noped out.


> - I have to use every variable I declare, otherwise it won't compile (b/c golang authors decided this produces better code)

> - There is no built in map or reduce function. You're expected to use for loops everywhere because it is more explicit

Go is an imperative language where no implicit magic happens in the background, like auto-creation of resources at runtime; every sequence needs to be manually defined by the programmer. "With imperative code, you're telling your program how to do something. And with declarative code, you're telling your program what you want to do." - Tyler McGinnis, React Fundamentals. It depends on the programmer, but I like to be in control.


Yeah I get that. I prefer declarative code. I really enjoy constraint programming, for example, which is anything but imperative. I had no idea there were folks who preferred imperative languages until I discovered the Go community. I always figured people choose imperative languages for lack of familiarity with more declarative ones.


Surprising that you mention this with Clojure. Do you find the churn to be from libraries or within the project code itself?

The Clojure language and core library is extremely stable, almost uncannily so. I can't think of any major upheavals it's had for a long time. Transducers from 1.7 maybe? But those fit in with the rest of the core library and didn't cause any issues.


The problem is not the language being stable or mature. I'm sure Ruby has been around since 1995, Lisp existed even before my parents.

With Lisp, you can create a "Hello World" program in a variety of ways. Sure, you can refactor it, starting with how to print the letter "H", and throw in UTF-16 support as well. At the end of the day, you wasted precious time and resources printing "Hello World".

Doing things in variety does not mean value.

Doing it right the first time means value, now imagine if it's a team.


I don't recognise the picture of Lisp programming qua Lisp. "Refactor" probably wasn't even a word you'd hear for most of Lisp's existence. The Common Lisp standard definition is clearly not churning. I used to have a mail signature when Python was relatively new

    There should be only one way to do it.  -- Python
    There's more than one way to do it.  -- Perl
    Do the right thing.  -- Lisp


I'm using Hy[0] for some API-accessible "glue code" (basically doing high level stuff with scipy.sparse and friends as building blocks).

[0] hylang.org


Your questions are well answered by Paul Graham in http://www.paulgraham.com/avg.html.

There are many Common Lisp and Scheme environments. Many have excellent JIT compilers. They all have their relative strengths and weaknesses. So it really depends on your specific use case as to which environment has the best support. For me the biggest factor is that Lisp is very productive for use by a small and competent team. In those situations they tend to create a DSL and evolve the system. PG's essay describes the reason Lisp fails many projects in the section "The Blub Paradox".


I'm a big PG fan but to be honest, him building viaweb in Lisp is 99% because: "Robert and I both knew Lisp well, and we couldn't see any reason not to trust our instincts and go with Lisp.", and not because of a competitive advantage in the language.

There are so many things that can go wrong in a startup, it's not the time to pick a stack or database you know nothing about.


Unfortunately, PG's post is orthogonal to what I am trying to find. My concerns are mostly around community and operational aspects AFTER adopting lisp


I use CCL and a set of helper functions (that take continuations) so I can quickly and easily generate bits and pieces of HTML from my shell. I have previously written the same helpers in JS, Python, and Ruby, but there was a certain elegance about the LISP version and that's the only version I use and maintain anymore.

I doubt anybody knows how often I'm using LISP with this little tool because they only see the resulting HTML, but it's been a big help to me.
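In case it helps anyone picture it, helpers of that continuation-taking shape can be as small as this (a sketch; the actual tool's names are unknown to me):

```lisp
;; Each helper prints an open tag, calls the continuation to emit
;; the element's body, then prints the close tag.
(defun tag (name k)
  (format t "<~a>" name)
  (funcall k)
  (format t "</~a>" name))

(tag "ul"
     (lambda ()
       (dolist (item '("one" "two"))
         (tag "li" (lambda () (princ item))))))
;; prints: <ul><li>one</li><li>two</li></ul>
```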


Low traffic, but https://rl.gl is running production on SBCL (on kubernetes). Common Lisp is a joy.


Clojure directly tackles most of these problems, and it seems to me (subjectively) that it's one of the most widely used Lisps for actual production systems.


We use Clojure + ClojureScript for just about everything where I work. There are a few older applications that are in Javascript with NodeJS and we have one that is C# and .NET but the rest of the company pretty much runs on Clojure. As mentioned elsewhere, Clojure doesn't really have any of the issues that you mention.


Usually using Lisp in production means making use of either LispWorks or Allegro Common Lisp, and not the FOSS offerings.


Why?


Because they are the surviving commercial vendors from the Lisp Machine days, with a complete graphical developer experience and optimizing compilers.

If you want a full blown Common Lisp experience, with a GUI based REPL, GUI debugger with edit-and-continue, and all the nice stuff you see from Xerox PARC, TI and Genera demos, they are the ones to get.


CCL has its own IDE which I find preferable to those of LW and Franz. It doesn't have the GUI construction set that MCL had, and unfortunately the IDE is Mac-only, but I find it much more productive than Emacs/Slime.

SBCL has an exceptionally good optimizing compiler now. It probably generates faster code than LW and Franz in most cases.


The CCL IDE is indeed good. I played around with CCL’s Cocoa bindings a lot earlier this year, then decided to, as another commentator said, throw some money at the problem and bought a LispWorks license to get CAPI.


If I'm programming in Lisp, I actually want the same venerable Lisp-based IDE that I use for other languages.


You can run applications on the Web now.


Another good solution is to make a command line app with SBCL or CCL and use Radiance https://github.com/Shirakumo/radiance or something similar that opens a new browser window for your app.


Web is pretty bad for many (most?) use cases.


And why not SBCL?


Because it falls short of the overall Lisp experience that started with Interlisp-D, Lisp Machines, and Macintosh Common Lisp, with their graphical tools and structured data REPL.


I am using CL in pre-production to avail myself of its ability to compile & load at runtime. I haven't used Lisp in production in a while (dating back to Zetalisp!), and the decades of perspective lead me to some surprising conclusions.

I have encountered neither ecosystem issues (due to quicklisp) nor community issues (due to "addition by subtraction" in the Lisp online community). However, I have encountered plenty of issues due to dependency-hell issues in the non-CL software I'm using: Python 2 vs 3 vs Python package incompatibilities; Python 2 vs 3 Ansible/Vagrant container/VM/distro incompatibilities; node.js vs npm vs inter-node-package version incompatibilities; etc.

CL has been gloriously stable, and given me substantially less heartburn and fewer headaches.
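The compile-and-load-at-runtime workflow is hard to convey without seeing it; the essence is that you can patch a live image:

```lisp
;; First definition, compiled into the running image:
(defun price (x) (* x 100))
(price 3)  ; => 300

;; Later, redefine it in the same running image -- existing
;; callers pick up the new definition on their next call:
(defun price (x) (* x 110))
(price 3)  ; => 330
```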


Oracle used Lisp [1] in the UIs for their gen-2 cloud product. They eventually hit a wall hiring the kinds of people Oracle wanted to hire (few of whom wanted to learn an esoteric language), and made the tragic decision to move new development over to TypeScript (and port many of the key existing surfaces for ongoing maintainability). Mixed results on that. Many who were around for both the old and new code bases report that, while the old one took more effort to pick up, it was easier to trace data flow and is less prone to creating weird state bugs.

[1]: https://www.linkedin.com/pulse/building-cloud-choosing-lisp-...


I'm not a lisp programmer, but I have used emacs in the past.

For people w/ lisp dev XP, can I ask this: Is the ".emacs/config bankruptcy" issue a general problem? Specifically, when you use so many powerful devices, macros etc; but with little predictable code structure, then further development that sits well with existing code becomes pretty difficult.


I’m not an experienced lisp programmer, but Emacs bankruptcy is imho avoidable. You must understand that people often configure their editors by the worst of Stackoverflow-driven coding... copy-pasting random snippets till things work, and accumulating cruft over time. Also, Emacs lisp is not particularly great (compared to other lisps) and Emacs configuration is typically done through a bunch of exposed global variables. YMMV.

A production codebase hopefully merits very different attitudes :-)


Also, frameworks seek to claw back some structure from the morasses.

It's kinda fine if every library you use is _slightly_ different if you can have a unified interface for how to use them (spacemacs layers system for me).

It is annoying figuring out which invocation of variables sets the tab-width for _this_ new language though...

Also, most libraries in the emacs ecosystem have become more standardised in their interfaces, so things aren't too different nowadays.


I think this is the problem. copy-paste a powerful snippet anywhere, no need to put it in a particular place or integrate it into an existing structure.

But the global variable thing is interesting; Does regular lisp differ a lot wrt snippets having broad, global effects?


Yazz Pilot (https://github.com/zubairq/pilot) was originally written in Lisp (Clojure) but we switched to Javascript, simply because it was easier to get third party developers on board with Javascript compared to Lisp


> have to restart the server every 20 days because of some memory leak

Hmm this seems like something super specific to their async setup, rather than 'common lisp leaks memory'


I just would like to add to all the great comments that Clojure also has access to the Python ecosystem via libpython-clj.


I've used Lisp for a production system I delivered one year ago. I used CCL as an implementation and had a lot of success.

Here i'll cover your points:

> poor ecosystem of libraries - few gems, most other half-baked

I'd say the ecosystem is high-quality. There are relatively few libraries (around 2000+) compared to other ecosystems, but on the other hand most of them are either interesting, or unique (they wouldn't exist in languages that lack certain features), or really well-written. Or many of those qualities at the same time.

In the absolute sense, there are enough libraries, IMO.

If you need something that isn't available, you have at least three options:

1. Bind to a C library; Common Lisp has many facilities/libs for this, some of which make the task rather easy.

2. Same as (1), but use ECL (Embeddable Common Lisp) to integrate a C program with a Lisp program as a whole, easily calling C from Lisp and Lisp from C, etc.

3. Use the Java ecosystem via the Armed Bear Common Lisp (ABCL) implementation. ABCL makes calling Java libs a piece of cake: you can call a Java method, or instantiate a class, with just one line of code. Really easy.
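For option 1, a CFFI binding can be as short as this (using libc's isalpha as a stand-in for a real library; assumes libc symbols are visible to the Lisp image, as they usually are):

```lisp
;; Declare the foreign function once...
(cffi:defcfun ("isalpha" c-isalpha) :int
  (c :int))

;; ...then call it like any Lisp function:
(c-isalpha (char-code #\a))  ; nonzero (true)
(c-isalpha (char-code #\1))  ; => 0
```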

> poor community coordination

That's a bit meaningless. There are really good-quality books and resources; I think that's what counts.

> Dependency management limitations with quicklisp

Quite the opposite. Quicklisp is awesome, better than PIP or NPM. It "just works".

> And some specific red flags like:

> poor support for json[6]

The opposite, many libs for JSON. The one I use, I like a lot.

> poor support for async

What does this mean? You can use cl-async, but I wonder what the benefit would be when in CL you have actual threads and great libraries for concurrency like lparallel (which introduces parallel processing constructs) or chanl (which gives you an easy-to-use channel implementation).

Cl-async allows you to leverage the whole power of libuv (just like Node does), but why? I'd rather use threads.
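As a taste of lparallel's constructs (a sketch):

```lisp
;; Create a kernel of 4 worker threads, then map in parallel.
(setf lparallel:*kernel* (lparallel:make-kernel 4))

(lparallel:pmap 'vector (lambda (x) (* x x)) #(1 2 3 4))
;; => #(1 4 9 16)
```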

> have to restart the server every 20 days because of some memory leak [3]

Never heard about such things.

Perhaps you are confusing the condition/restart system (which is the Lisp exception-handling system) with having to restart something?

> hack to tune GC [5]

The URL you cite says "We had to tune the generation sizes".

Any self-respecting developer who wants to release a high-performance, production-quality service NEEDS to tune the GC. Tuning the GC is not a "hack"; it's just setting configuration parameters. Ask any Java developer who has had to "tune" the JVM for best performance.
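In SBCL, for example, the knobs are ordinary configuration (a sketch):

```lisp
;; Give the image more room at startup (shell, not Lisp):
;;   sbcl --dynamic-space-size 8192   ; megabytes

;; Let the nursery grow to 64 MB between minor collections:
(setf (sb-ext:bytes-consed-between-gcs) (* 64 1024 1024))

;; Trigger a full collection at a convenient moment, e.g. after a bulk load:
(sb-ext:gc :full t)
```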

By the way, everybody raves about Golang, but its GC is comparatively outdated next to the ones in most Lisp implementations (or the JVM). For example, CCL has a very good generational garbage collector.


> * poor ecosystem of libraries - few gems, most other half-baked

Specific to Common Lisp: I don't find the ecosystem to be poor, but I notice that many are turned off by the look and feel of library landing pages, which do feel dated. In some cases I have felt the libraries lacked documentation, like Caveman2, but in the end I found there was little to document after reading some of the source code.

Specific to Clojure: I don't think this criticism applies to Clojure at all and the popular Clojure libraries usually come with great documentation as well.

IMO, JavaScript is also littered with mounds of crappy half-baked libraries, but people still use it for server-side stuff even though they have a choice to use something else.

> * poor community coordination

Which lisp community?

> * Dependency management limitations with quicklisp

What limitations? Also, if you don't like quicklisp or common lisp in general, I think Clojure really nails dependency management and find it very easy to use.

> * poor support for json[6]

Specific to CL: I can understand this because there are a number of libraries and no obvious winner. JSON isn't the only data serialization format though. Guess it depends on what you need. https://www.cliki.net/serialization

> * poor support for async

Specific to Common Lisp: There is cl-libuv: https://quickref.common-lisp.net/cl-libuv.html and wookie.

Specific to Clojure: core.async is great and if you don't like it you are also free to use node.js as your runtime with clojurescript.

> * have to restart the server every 20 days because of some memory leak [3]

If I had a dollar for every node.js production app that suffered this very same problem I could probably retire today. I thought it was interesting that the author of that article says the service was re-written in node.js given node.js is hardly immune to these sorts of problems. IMO, async is just a pain to work with and these types of issues are a manifestation of that.

> * hack to tune GC [5]

Users in JVM land also report having to resort to tuning the GC or selecting the optimal GC for their use case. I don't think this is a reason to shy away from lisp.

> If you are using lisp in production for non-trivial cases, do these issues still exist?

I have not personally encountered these issues in Clojure or CL. That is not to say they do not exist. But they also exist in plenty of other popular platforms to varying degrees. See my responses above for examples.

> and, finally, if you had to re-do your project, would you chose lisp or something else?

I would still choose lisp, either Clojure or Common Lisp. I generally favor Clojure to CL.


KDB+ is Lisp under the hood. Please count it as a highly successful Lisp used in production.



