Why Lisp? (atlas.engineer)
271 points by susam on Nov 12, 2021 | hide | past | favorite | 273 comments


I like the term "Eternal Language": programming languages where, if you write code now, you will be able to compile and use that code ten years from now (a software lifecycle eternity). Common Lisp, C, C++, Fortran, (edit: Java) are close to eternal languages. After losing a huge amount of Python 2 code (I wasn't going to rewrite it), I implemented Clasp, a Common Lisp implementation that interoperates with C++ (https://github.com/clasp-developers/clasp.git), so that I can write code that will hopefully live longer.

I am one of the few software developers with code that I wrote 27 years ago that is still in active use by thousands of computational chemistry researchers (Leap, a frontend for the computational chemistry molecular dynamics package AMBER, implemented in C).


In the realm of spoken languages, they say for a language to become immortal, it has to die. Latin is the classic example of this. Languages that continue to evolve will continue to "break" so to speak.

In this vein, Standard ML (SML) is truly immortal, because the standard is set and it is "finished".

Just a fun thought!



Until "modern classics" education became a thing, Latin was arguably still a living language, if in limited use: mainly among the clergy, certain professions, and educated circles, and it was effectively the official language of certain countries.

Then, around the 17th century, modern classics turned Latin education into navel-gazing over a bunch of Roman republic/empire-era works, and disregarded actually using the language.


If I remember correctly, there were still some scientific works being published in Latin in the 1900s.


I really enjoy modern translations like "night-club" -> "taberna nocturna".


Also awesome that this is perfect Spanish.


"Modern" language teaching killed Latin.

Before that, it was taught without much stress on grammar: mostly memorizing phrases and short sentences and learning how to use the language.


Common Lisp's extreme extensibility thanks to macros means new features can be added without breaking the standard.


Firstly, programming languages are always extended when a program is written, because a program consists of new definitions. This is true of SML. Any SML program consists of the language plus that program's definitions of symbols, which effectively creates a local SML dialect: standard SML plus the program's vocabulary.

Secondly, natural language evolution isn't controlled by people who know what they are doing, or who care about language immortality. They sometimes break things just for the heck of it.

Random examples:

- the "great vowel shift" in the development of English: a linguistic buggery that caused what sounds like "a" in just about any language that uses the Roman alphabet to be written using "u".

- random reassignments of meaning. For instance, the concept called "sensitivity" today was connected to the word "sensibility" just around two hundred years ago. That's pretty gratuitous; I could easily live with a branch of English which had kept it the way it was.


I could happily live with an English that didn’t accept literally to mean “not literally”.

Informal: used for emphasis or to express strong feeling while not being literally true. "I was literally blown away by the response I got."


But that English may have never existed! "Literally" has a long history of being used to mean "a kind of figuratively that is almost perceived like it's really happening" or something like that; perhaps as long as the word itself.

What I could live without is the dialect of English in which "literally" is used to explain away some figure of speech as not being a figure of speech, whereby, oops, there is actually no figure of speech to explain away.

Like "I literally just moved to this town yesterday, so I don't know my way around yet".

What is that for? Nobody suspects that you figuratively moved into this town yesterday; if you moved here just yesterday, then "I moved to this town just yesterday" is perfectly adequate.

(Now being born yesterday is a genuine figure of speech; we could make a meme in which a speech balloon next to a newborn infant's mouth makes a joke about literally having been born yesterday: that would be a valid use of literally.)

Since being blown away is a genuine figure of speech, the expression "literally blown away", referring to a situation which is still only figurative, is a well-entrenched use of the word: hundreds of years old, I think. It says that the figure of speech is so fitting that if you imagine me being literally blown away, that is actually fairly accurate, so much so that when the inevitable film adaptation is made of my life story, special effects will necessarily have to be used to portray it that way.


It’s just become a word of exclamation, hyperbolic at that, no? If someone literally moved to a town yesterday, they probably mean neither literally nor figuratively, but rather, just highlighting their short tenure at the new place.

I’m a bit of a language pedant, but find it difficult to argue with the society assigning new meanings to words as the language evolves. Modern English is a tragically malformed and bastardised version of Saxon mixed with French, after all.


You'd have to go quite a ways back for that one.

Charles Dickens, "Nicholas Nickleby" -- "[h]is looks were haggard, and his limbs and body literally worn to the bone"


We can look at it like this: kidding and hyperbole aren't bad grammar. Limbs could literally be worn to the bone; that they are actually not isn't a matter of grammar.

The character is fictitious in the first place, so we could argue that "his looks were haggard" is misusing the word "were", since "were" may only refer to a person that existed. :)


Damn, great vowel shift sucks.

Learning English in school was really weird.

Finnish as a language is almost entirely pronouncially composable. If you know how every single letter is pronounced, you can also pronounce any word.

In English there are all these silent letters, different pronunciations depending on the position of the letter, etc. And the vowel shift: "a" written as "u", "ai" written as "i", "i" written as "e", etc.


>Finnish as a language is almost entirely pronouncially composable.

So is Sanskrit. It is supposed to be one of the most logically designed languages.

>If you know how every single letter is pronounced, you can also pronounce any word.

Again, similar in Sanskrit. Taking it further, it is perfectly acceptable to make up words of your own by combining two or more words.


I think I've recently read that the amount of Akkadian/Sumerian material is vastly greater than anything the Latin period produced. Yet, in comparison to Latin, almost nobody is able to interpret it.


There's actually been a decent amount of work on SuccessorML to add a few things to the language.


I hope that Rust joins the pantheon of eternal languages. I feel like it has a lot of the advantages of a language like C, but is just more modern and well thought through.

Also, you did not mention Java. I have very old Java code that still works fine.


Java is probably the biggest "eternal" language we have these days. Probably tied with C and C++.

There's <<so much>> Java code churned out that it's unbelievable. Business systems have a ton more code than infrastructure systems.


> Java is probably the biggest "eternal" language we have these days

Is it? I haven't followed it in years, but I remember reading about people sticking with Java 8 because later versions broke backwards compatibility in some cases.


It's a bit complicated: technically, the Java language never broke backward compatibility. You can run existing Java programs fine on newer JVMs by adding extra command-line parameters to include all existing libraries in the default module. But it gets complicated with intermediate tools which have not been updated to handle this.


Yep, modules were a big break, however all the relevant libraries that actually matter on the Java ecosystem are available on latest versions.

Basically, it is the same kind of effect as with school/university teachers whose idea of C++ is C with extras; if they teach something closer to C++98 proper, even in 2021, one should consider oneself lucky.


> [java] Probably tied with C and C++.

I think Java wins over C/C++, because there's very little low-level, machine-specific code you can write in Java, whereas you surely can do that in C/C++.


True, and instead of rewriting the whole thing, if some C or C++ is actually required there is always JNI, JNA, or the upcoming Panama.


I would check on COBOL first.


There's a ton more Java out there. Java is much newer, but it's from a time when we're talking about tens of millions of developers worldwide, instead of tens of thousands. A lot more people have been churning out Java since 1996 than COBOL since 1959 or so.


Do you have a feeling for how many Java LOC are out there?


I've worked for a 10 person company where 5 were working on a Java backend, the app was about 10 years old. 250k lines.

I used to work for a mediocre multinational with about 50 people working on a big Java front end to their many servers. Just the frontend was about 3 million lines.

You've never heard of either of them, the multinational had a market cap of about half a billion.

I've worked for a bunch of bigger companies but I don't have numbers, I just know they had bigger systems.

I wouldn't be shocked if there are tens of billions to hundreds of billions of lines of Java in production right now. I imagine that something like half a billion are added each year.

There are so many big Java middleware companies nobody has heard of. You wouldn't even know they use Java if you didn't look at the job listings.


If you multiply the 5000 LoC each developer at your 10-person company wrote yearly by the tens of millions of Java developers in your previous comment, you get much more than half an (American) billion


That's going to be quite a loaded metric considering how verbose Java is.


Java programs are at most longer than the equivalent "short program" by a small constant factor, and that is mostly around project initialization.

Also, I have never really understood it: C++ is not the shortest thing either, with its duplicated headers/implementations, and Go's error handling deserves a whole discussion in terms of verbosity, yet these languages are not considered repetitive.


Well, as a former COBOL programmer, I would argue that it is no less verbose than pretty much any other language.


C# is also probably in a similar league, for the same reasons.


> I hope that Rust joins the pantheon of eternal languages.

I'm torn about this. On one hand, Rust is so much better than C it is ridiculous, and I really hope it does become a language with decades of longevity.

On the other hand, Rust is the first new "systems programming" language in forever and is finally prying the door open to something other than C/C++. I'm really hoping that Rust opening the door and paving the way means that now we can get something better.

What worries me about Rust is the impedance mismatch down at the very primitive hardware level--bytes and registers. The embedded guys are doing an amazing job papering over it, but the abstractions leak through quite a lot and you have to twist things around to satisfy them.

The problem is: that's a lot of fiddly code with all kinds of corner cases. So, you either have a lot of work or you throw up your hands and invoke C (like Zig does).


I'd give it another 2-5 years before the syntax settles. Some serious improvements have recently been pushed out and I wouldn't be surprised if further improvements are on the horizon.


editions still allow backwards compatibility though, it's not like the syntax changes will break it


Rare example of broken backward compatibility: `cargo install gltf-viewer` doesn't work today because of an error in gltf library. The error is fixed, but gltf-viewer is not updated to use the fix.

   Compiling gltf v0.11.3
  error[E0597]: `buf` does not live long enough
   --> /home/vlisivka/.cargo/registry/src/github.com-1ecc6299db9ec823/gltf-0.11.3/src/binary.rs:225:35
      |
  119 | impl<'a> Glb<'a> {
      |      -- lifetime `'a` defined here
  ...
  225 |                     Self::from_v2(&buf)
      |                     --------------^^^^-
      |                     |             |
      |                     |             borrowed value does not live long enough
      |                     argument requires that `buf` is borrowed for `'a`
  ...
  233 |             }
      |             - `buf` dropped here while still borrowed


I'd like it to be so, but I don't have faith in it. Rust doesn't have a standard the way Lisp/C/C++ do, and a standard is what those languages owe their eternality to; at least with Lisp and C, their relative simplicity makes all the difference for something like that. Despite how opposite they are, they feel like discoveries rather than inventions when compared to something as complex as Rust or C++.


I don’t think Rust will (at least not in its current form). It’s breaking too much ground in the (real-world) PL-design space. We just don’t know what really works yet!


Eh; sum types, everything-is-an-expression, and composition over inheritance get you very, very far toward optimal language design. Sure, there are some potential ergonomic improvements that could be made to Rust, but the foundation is so strong that I don’t see anything making those same design decisions supplanting Rust. More likely that a new Rust edition would incorporate those kinds of changes.


Java is younger than Python.


But Java 1.0 code is still (mostly) compile-able on recent Java versions, not so much with Python where I'd worry about going back 5 years.


Many Python developers and users were burned when the Python project decided to set fire to billions of lines of code.

I have zero trust in Python as far as code longevity is concerned.


More like trillions of lines of code, from probably millions of developers, dating back to the 90's.

"The total code size of Zope 2 and its dependencies has decreased by over 200,000 lines of code as a result." - from the 2013 Zope documentation... how many lines of code was it before then?

Python really burned its bridges. It's shown that it's a toy language now, and demonstrably unfit for any real production-quality projects.


> how many lines of code was [Zope] before then?

Soooo many. Zope was a very formidable code base to delve into. I was trying to learn it because my company was using Plone, and Zope quickly surfaced through the abstractions.

Zope's codebase might have been more accessible if type annotations had been a thing back then. Their implementation of "interfaces" for Python was very interesting back in the Python 2.3 days.

I disagree that Python is a "toy" language now. It started out as one, and has been stumbling awkwardly away from that ever since the mid 2000s - virtualenvs, pip, pyenv, pipenv/poetry, type annotations, mypy etc.

In my entirely unscientific opinion, I think it was Django that started this journey, then of course scikit and NumPy leading into pandas, etc.


Both NumPy and Django were created in 2005, but Numeric, the ancestor of NumPy, is nearly as old as Python itself, and predates SciPy.


Bullshit. The 2 => 3 transition was mostly mechanical, and programmers who are unable to do it should get better.

I haven't seen py2 code in several years. Granted, I would look away in disgust if I did.


Right right right. I'll let my CTO know.


I wrote code that worked with both interpreters simultaneously for many years. I don't know why you'd worry so much.


> programming languages where if you write code now - you will be able to compile and use that code ten years from now (a software lifecycle eternity)

Fun fact: code written in JavaScript in 1995 will work in 2021, after 26 years.

If you are talking about "good practices": a few weeks ago I worked with C code from 2001 and it was just awful. Yes, it compiles, but it wouldn't pass any modern code review.


> Fun fact: code written in JavaScript in 1995 will work in 2021, after 26 years.

Sure, as long as you run it in Netscape Navigator 2.0.

In 2021 you're lucky if a JavaScript web app that worked fine on Tuesday still works on Wednesday. And shocked if it works in any browser other than Chrome.


It's plainly untrue. I worked on ECMAScript proposals and TC39 takes backwards compatibility really seriously.

Browsers, too, take backwards compatibility very seriously; features stop working almost exclusively when they turn out to be serious security holes.


You're right about backward compatibility - and yet so many things break anyway. And many web apps of the past only worked on Internet Explorer (much in the way that modern web apps only work on Chrome.)

However, the thing that broke Tuesday's app on Wednesday was probably a library or framework change.


That's the DOM manipulation and browser behaviors, not Javascript itself.


If you divorce JavaScript from the DOM, you have removed its entire reason for existence. So the DOM is effectively part of the language, just as the standard library is effectively part of C or Nim, etc. Without them, they are considerably less useful.


You remove the reason for its creation, not its reason for existence. Node is a thing, and doesn't touch the DOM. So, no, the DOM is not effectively part of the language.


Ah, but code written in Common Lisp in 2021 will usually work in 1995, after -26 years.

:)


Aside from CL implementation bugs. The implementations had lots of them back then, even the commercial ones. Nowadays, they are much better. Decades of effort to make an implementation bug free and shiny will have its effect.


That really is amazing. Is it true, even "usually", for most types of code? Both forwards and backwards compatibility.. it's like the holy grail.


For most types of code, other than things like code which has dependencies on implementation internals (like extensions in SBCL for instance), or uses FFI to access platform libraries that didn't exist in 1995; that sort of thing.

ANSI CL was last ratified in 1994; there has been no standard language change, and the community language improvements are just macro libraries. You should be able to take your alexandria utilities library, or optima pattern matching or whatever back to 1995.


There are a few community improvements that are language extensions. The standard had no notion of multithreading, the MOP was not in the standard, and packages really needed some way to avoid collisions of short package names (which led to the de facto standardized package-local nicknames). Pathnames may also have seen some de facto standardization; they were poorly specified in the standard.


Maybe the C code was just badly written


Yep.

Use some of the -W* flags (-Wall -Wpedantic -Wconversion, ...) and specify the standard -std=c90.

Avoid undefined behaviors:

- https://en.cppreference.com/w/c/language/behavior

- https://wiki.sei.cmu.edu/confluence/display/c

(Also use cppcheck, valgrind, gdb, astyle, make, ...)

Done.

Fun fact: JS and C are both standardized by ISO.


And always use "-fwrapv -fno-strict-aliasing -fno-delete-null-pointer-checks". Those are the source of the hardest-to-reason-about UB, and the small performance benefit is not worth the risk. I would love to always be certain I haven't accidentally hit UB, but that's equivalent to the halting problem, so for peace of mind I just ask the compiler for a slightly safer variant of the language.

P.S.: old code definitely needs them, as at the time compilers didn't optimize so aggressively, and lots of code does weird stuff with memory, shifts, etc.


Have a look at Apache 1 (C) or KDE 1 (C++); it'd be interesting to see what you think of those codebases, and they can certainly predate 2001.


Maybe it's next to impossible to write C code that isn't badly written


Or maybe we have too many developers that treat language of their choice as a religion.


>To encourage people to pay more attention to the official language rules, to detect legal but suspicious constructions, and to help find interface mismatches undetectable with simple mechanisms for separate compilation, Steve Johnson adapted his pcc compiler to produce lint [Johnson 79b], which scanned a set of files and remarked on dubious constructions.

-- https://www.bell-labs.com/usr/dmr/www/chist.html

Unfortunately, since 1979 the majority of C devs have thought they know better.


C code is of course close to the hardware, which as always, has benefits and drawbacks.


I was pretty sure that there were some subtle changes in ES5 and ES6 that were technically not backwards compatible (done because very little code depended on those behaviors)


Out of curiosity: would the 1995 JS code pass any modern code review?


I’d imagine it would get pinged for using var instead of let or const. And depending on who’s doing the review they might not like ‘new’.


>"I’d imagine it would get pinged for using var instead of let or const."

This single thing in isolation should never cast code as good or bad.


JS from 2003 passed code review for me a few months ago... so, yes... maybe.


Probably depends what it's doing. If it's interacting with HTML and manipulating the DOM then probably not


How do modern frameworks display content? Must be interacting with HTML and manipulating the DOM in some way.


It's not whether they do, it's how they do.

For instance, in 2003 there was no such thing as a mobile phone; passing code review today is probably going to entail either being responsive, or directing to a mobile version when accessed from a mobile device.



Okay, I will clarify my hyperbole in the face of the pedantic - mobile phones did not provide ubiquitous, standardized approaches to browsing the web, such that UX guidelines for the web did not include them; media queries and such did not exist except as drafts, etc etc.


I think that to earn the title "Eternal Language" that should have to work in both directions.

If I am an expert in language X and take a solar powered laptop and go spend 10 years living as a hermit in some isolated place while working on some big X program, with no communication with the outside world other than idle non-technical chat with the people of the village I go to monthly to buy supplies, if X is an Eternal Language then

1. My code that I wrote as a hermit should build and run on the outside world's systems,

2. I should still be an expert in the language X, only needing to learn library changes, fashion changes (such as coding style changes), and tooling changes to be ready to take a job as an X programmer at the same level as I had before I became a hermit.

Alternatively, suppose I don't know X but wish to learn it. To earn the title "Eternal Language" I should be able to go into my library and read the "Learning X" book I bought 10 years ago but never got around to reading and that should mostly be equivalent to buying and reading a recently published book on X.


Go is another that strives for backwards compatibility: https://golang.org/doc/go1compat

After seeing the Perl 5/6 schism, I was hopeful that the Python devs wouldn't make the same disastrous mistake, but the same thing happened to us with approximately 100k lines of Python 2 code (which we are now in the process of porting to Go and Rust), precisely for this reason.

It's exceedingly difficult to form trust in language developers who are willing to break working, production code in use all over the world for decades over a new unicode type and a few wishlist library items.


While Go strives for backwards compatibility, it's a bit too young for now at 12 years old, and already had a few changes around packaging and stuff like that. If we get to 20 years without Go 2 that will be a solid signal on the longevity of Go.


One of the newer (newer than C, C++, Common Lisp, ...) languages I feel comfortable about regarding this is Go. Definitely feels like I can write a simple Go program to generate a static website for example and it'll probably still work in that same state for a while.


It’s hard to say with any 1.x versions what’ll happen come 2.x


True, but... most languages wouldn't be on version 1.x after the amount of time (and progress) that Go has. Go has a track record by now, of being very careful about breaking things, and providing tools to help fix them when they do.

Sure, Go could introduce breaking changes in 2.x. Will they? Track record says "no".


As far as I can tell, the only reason for a Go 2 (as opposed to a Go 1.nnnn) is "we introduced a breaking change". Things that were considered "Go 2" are being worked in, as the language evolves, without the need to break previously written code.


Maybe Lua? Started in 1993, and LuaJIT is effectively frozen. I could see this one hanging on for a long time as the go-to embedded language.


Interestingly, Perl doesn't get a lot of love on HackerNews but I would classify it in the family of "eternal languages."


Even after the Perl 5 / Perl 6 / Raku debacle? Many developers left after that (Python pulled off the same trick a decade later.. apparently some language teams have very short memories.)


Said debacle arguably reinforced Perl 5's eternity, if anything.


Especially after the Perl 5/6 debacle. The two-separate-languages solution let them have fun with Raku, all the while ensuring that Perl 5/7/... will preserve backward compatibility.


Some of that code has been running for a long time. A lot of the Perl toolchain still works on 5.8.1 (released Sept 2003).

https://metacpan.org/release/JHI/perl-5.8.1

Test::Simple still targets 5.6.2, which looks to have been released around the same time (November 2003).

https://metacpan.org/release/RGARCIA/perl-5.6.2


I have quite a bit of personal scripting in perl that I wrote in the early 90s, still in use every week. Haven't changed any of the code in decades.


C/C++ languages are stable, but building actual software with them is extremely cumbersome on future systems. It is unlikely to "just work" because of how we have inverted the dependency structure of the vast majority of C/C++ programs.


Ehhh. Integrating an old C++ library into a new project may be difficult due to build systems or not useful due to dependencies.

However if you want to take an old C++ game written in 2004 and ship it on modern platforms and consoles you only need to update a very small amount of platform specific code.

Updating a 20 year old C++ project is probably easier than updating a 3 year old web app.


Tbf if you’re just looking to run an app on newer platforms you’d likely not need to do anything with the web app. They have a lot of issues but once stuff works it’s usually a very long time before it doesn’t.

It would probably still be harder to add a new dependency to a 3 year old web app than it would be to integrate a 20yo C++ project though, I agree with you there.


imo that would be a "forever program" not really a "forever language." The lack of standard packaging and terrible paradigm of shared dependencies has made C/C++ incredibly fragile and non-portable when you need something from years ago to compile today.


JavaScript probably qualifies, too. It's not going anywhere for a very long time, if ever. Same with COBOL. COBOL will have to be killed with fire.

I'm assuming here that "eternal language" doesn't necessarily mean "good language". :-)


Unfortunately JavaScript code is often built on DOM and NPM quicksand.


How do you lose Python 2 code? The old interpreter still works.


But it is no longer a viable option for new work, and in this case a lot of the code, IIRC, was libraries for writing programs.


> "Eternal Language"

Even more than being able to compile and run it in a few decades (which is awesome) there is also the part that such eternal language ecosystems grow excellent tooling (debuggers, tracing, performance analysis, source tooling, build tooling, library maturity, etc) thanks to the stable platform and all the years dedicated to making it all ever better.

I greatly dislike the language of the year hype not so much because the language change itself, but because all the wonderful mature tooling doesn't exist in the new shiny thing so we're back to debugging via print and performance analysis via guesswork.


The only place LISP is eternal is on HN. A super-microscopic minority of engineers use it for real-world software development. The standard hasn't been updated for 2 decades. There is only one real compiler implementation. No WASM. No proper mobile support. Library support for real world tasks is abysmal.

Common LISP's tomb is indeed shiny and eternal. Developers at HN will mourn at its graveyard for eternity.


> There is only one real compiler implementation

what kind of meaning of 'real' reduces the several compiler implementations to just one?

Example: I use two native code compilers (SBCL and LispWorks) on my Mac. Both are available on other platforms, too. Is one of those not 'real'?

What about, say, ECL or Allegro CL? Are they unreal?


I honestly think most languages belong to this category. I can't really think of a language that changed so much that old code no longer works (other than python 2 -> python 3).


was it you that said that the difference between writing templates in c++ and writing macros in common lisp is like the difference between filling out tax forms and writing poetry :)


To see the REPL and debugger advantages of Common Lisp over other Lisps, Python, Haskell etc., you really should try for yourself in SLIME or SLY (a SLIME fork) imo.

But anyway, let's say I made an error somewhere and the program halts, putting me in the debugger. I can then examine each frame preceding the error. I can see every value going into my functions and jump around in the code generating those values. I change the code, while the debugger is still running, recompile it into the halted program and then pick a frame point right before the error occurred, to test if my changed code solves the problem. If it doesn't, I'll examine some more, perhaps start a second instance of the REPL to experiment a bit, then try another change in my code. All the while, I don't lose state. Once the problem is solved, my program will continue where it was as if the problem never happened.

For something like game programming, where you may be deep in the game and have encountered some rare combination of circumstances, this is simply invaluable.


REPLs are great, and I always prefer a language with one to one that doesn’t have it, but being able to patch a running system is Lisp’s killer feature in my opinion. I’ve never seen another language that supports that.

I had a bug in a long-running program that would only show up after like six hours. I couldn’t replicate it in isolation because it had something to do with how state was being maintained. I set a conditional breakpoint right before the crash, inspected the stack, patched the function, and then had it pick back up by reëvaluating the current function call.

Damn thing just worked. It was amazing, and saved me so much time.


Smalltalk is designed around this functionality. In fact the whole Smalltalk system is a live instance that has been modified like that since development started.


Smalltalk is seriously awesome. I think it has a slight advantage over lisp in its fix-the-airplane-while-it-is-flying-at-3000 feet property.


How are the two not exactly the same in that respect?


Python programmer not accustomed to a "real" REPL. My closest experience would be Jupyter notebooks, where dangling state is more a liability than an asset.

> ...inspected the stack, patch the function, and then had it pick back up...

After you edit the live state of the program, how do you translate that into code sitting in source control? Do you save this blob of memory and pass that down through generations?


1. Stop the system.

2. Modify the source code containing the function.

2a. Save the file.

3. Copy the function definition to the debugger's REPL, and evaluate it.

4. Resume the system from the frame that calls the function.

But yes, you must be diligent about making sure the source files contain the code that is actually running, though that's not much different from artifact tracking.


Usually one wants to make the change persistent in the sources.

Step a): One just evaluates from the sources, while changing them. Once done -> save sources.

Step b): Create a patch file, which changes the image. Such a patch file can be loaded while starting an image, until one decides to save a new image with the changes already loaded.

For example, there is a new release of the commercial LispWorks IDE once every year or two. Users usually get only patch files for bug fixes during the year or two of maintenance. Thus I have a directory of patches to LispWorks, which are loaded when I start the base image.


One way would be to make the change in the source file and then recompile/reload that one function. This is pretty easy to do with SLIME; you can highlight the snippet of code you want to evaluate, and then send that to your program to be evaluated on-the-fly.
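For contrast, a rough Python sketch of per-module reloading (importlib.reload re-executes the module source, though callers holding a direct reference to the old function object won't see the update; the hotmod module here is made up for illustration):

```python
import importlib
import pathlib
import sys
import tempfile

# Set up a throwaway module dir; "hotmod" is a made-up module for illustration.
tmp = tempfile.mkdtemp()
sys.path.insert(0, tmp)
sys.dont_write_bytecode = True  # keep stale .pyc files from masking the edit

mod_path = pathlib.Path(tmp, "hotmod.py")
mod_path.write_text("def answer():\n    return 41\n")

importlib.invalidate_caches()
import hotmod
print(hotmod.answer())  # 41

# Simulate fixing the source file on disk, then reload just that module
mod_path.write_text("def answer():\n    return 42\n")
importlib.invalidate_caches()
importlib.reload(hotmod)
print(hotmod.answer())  # 42
```

It works, but nothing resumes a halted computation with the new definition, which is the part SLIME-style workflows add.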


This is one of the fundamental features of the BEAM (the Erlang/Elixir virtual machine) as well.


Almost all Smalltalks, especially the image-based ones, support dynamic run-time patches.


Where would you recommend getting started with Smalltalk on Linux today?


Squeak or Pharo if you want the full image-based experience. Some distros come with Squeak, but the packaging can be confusing and the versions are often old. A slow and slightly janky but surprisingly usable Squeak experience can be had in a web browser at https://squeak.js.org/run/ . Older Squeak versions tend to run faster in SqueakJS.

There are tutorials all over the place; however, as Smalltalk, like Lisp, has dialects, they tend to be system-specific. Lots of material at http://stephane.ducasse.free.fr/FreeBooks.html .


jdougan already mentioned Pharo. There's a MOOC here : https://mooc.pharo.org/


FORTH (indirect-threaded implementations at least) allows you to redefine words while the system is running (fingers crossed).

Just because that feature is available, and was at one time desirable (when machines were so slow that compiling would break the flow), doesn't mean one needs to or should use it today.


Compiling can certainly still break the flow depending on the language and size of the project. Just ask Google.


I believe .NET 6 allows hot reloading, and Erlang already has that feature too


You can actually do something similar (much more limited, it sounds like) on Java, of all things.

https://devblogs.microsoft.com/java/hot-code-replacement-for...

As long as it doesn't change the class signature, you can happily edit-save-replace code, and it jumps to the top of the method you're editing. It's basically instant, so you don't suffer the javac and JVM startup cost.

I don't think I can do the same in more dynamic languages, like Ruby or Python. Or, maybe, my IDEs don't support it (IntelliJ suite).


That’s really cool to see. For Ruby, pry is usually the REPL to use for hot reloads. https://www.cbui.dev/an-introduction-to-pry-a-ruby-repl/


I hear this talked about quite a bit, but it still doesn't quite make sense to me why it's such a big deal: I know the debug-fix loop is far more efficient if you don't lose state, but the situation is nowhere near pervasive: when it comes up, I can generally write something quickly to re-create the necessary parts of the state automatically.

If I was doing that every day, or even every week, I could see it being a huge advantage to incorporate a solution in the language—but at least for me it just doesn't appear to be an obstacle that often (incidentally my background was initially in game programming, too).


> I can generally write something quickly to re-create the necessary parts of the state automatically.

I think this is where the trip-up is. In a game, for example, that state is often exceptionally complex, and getting everything right back to where you came from is usually only possible by running the full game and recreating the same state.


It just doesn’t come up that often though.

In most cases it’s not hard to narrow down which aspects of state are relevant; you really only need to preserve everything for rare, exceedingly subtle/deep bugs, which is a special case, not a general-usage kind of thing. And yet this feature is often discussed as revolutionary for programming in general.


I wonder what it is that prevents e.g. Python from having a similar “persistent state in development” tool/REPL? I’ve never really used Lisp other than a MAL I did a few years ago for kicks, so I’m not well versed in its dev tooling.



You can definitely do something similar with e.g. ipdb

  from ipdb import launch_ipdb_on_exception
  with launch_ipdb_on_exception():
    foo()


Ha! Very interesting, thanks for the feedback. The number of times I restarted a game after a one-line change plus recompilation...


Having dealt with Angular dev server restarting for the tiniest code change, this sounds like a dream.


They seem to offer an open-source (I think) product, the Nyxt browser [0]. It has some interesting features; one is lossless tree history, which I can see being very useful!

https://nyxt.atlas.engineer/

Edit: Right now it is sponsored by an EU grant! Maybe the EU can finally do something useful


I assume you think GDPR is useless?


No, it's actively harmful.


I wasn't expecting this :) Care to elaborate?


Web usability for me has suffered a lot, and I am not aware of any benefits.


I'm fine as long as usability suffers in favor of privacy. Which is the case.

Still, I'm not gonna walk away from the prospect of getting both usability and privacy together. What are your opinions on the matter?


There seem to be blog posts on "why we chose language X" nearly every week on HN and most often X tends to be a language that is not widely used.

How do people manage to get any real work done using languages that do not have large and thriving communities that can build and maintain various libraries?

For example, I mainly do applied ML work in bioinformatics. There really are no ecosystems that come close to those of Python and R in terms of their completeness for data science and machine learning libraries and infrastructure. I've looked at other libraries in other languages, hoping to switch, but since there are massive gaps, it's often a choice between building supporting libraries OR actually getting my work done.

I'm truly surprised anyone can build much of anything complex in languages with few libraries.


Common Lisp has libraries that are 30 years old that still work. The community seems to be (slowly) growing right now. The core language supports some fairly complex functionality out of the box. There are a number of widely used libraries that add things it is missing due to age, such as threading, atomics, and cryptography.


I feel like if you consider the average developer, most people are writing API-like services that talk to a database and a few assorted APIs: payments, email, notification services, etc. ML is kind of a niche area where Python definitely excels and almost stands alone, but web frameworks are a dime a dozen, and every language pretty much has a Postgres or Mongo client.

"Much of anything complex" probably wouldn't hold true for most developers...I mean, it's all complex given technology, but 99% glue code that can be written in anything. That's generally true for me at least.

I have been bitten by lack of libraries for things like Cassandra in the past and switched languages because of it, but that's the exception and not the norm for me, and you really have to choose the right language for some projects. Or have some kind of architecture where you can have different parts in different languages.


Lisp is very good even if you write all your code into text files and invoke a build command before running it, which you do in a completely fresh image.

If my only way of debugging were to upload the compiled image onto an embedded target and observe the effects of print statements in a serial log, I would still overwhelmingly want to be using Lisp.


I find it interesting that I can re-write this paragraph from the article and replace Lisp with Forth and it is still true.

"Forth takes a far different and far more pragmatic approach. The Forth designers do not assume what syntax, features or functions will be necessary in the future. The developers of Forth give you the full powers that they had to develop the language. You can develop macros for dynamically generating code. Without going into depth about how this mechanism works, a feat that was possible in Forth was the implementation of an object oriented programming system without having to change the compiler!"

It still seems to be a challenge to use programmable languages for day-to-day work despite how much fun they make programming for the programmer


> procedural, stack-oriented, Reverse Polish Notation

Lisp sounds nicer.


It has merits for sure. It is higher level out of the box. Forth is low level by design for hardware level coding.

However, as the paragraph states, both begin at some level X, but neither stays at that level in the hands of one who understands how to use them. When used as designed, one does not program in these languages. You program in the language you create to solve the problem at hand.

An example at the extreme end of this idea is CoSY (Armstrong), an APL-like language written in Forth. A small example is a usable OOP extension written in ten lines (Paysan).

My unspoken point was that two independent language designers (McCarthy & Moore) found that similar ideas, expressed in that paragraph and implemented in radically different ways, improved their coding productivity.

It raises the question: is there a lesson here for programming? It could be that IRL these concepts just don't work well, or that they only suit a small subset of real-world problems or a small subset of human brains, but it makes me wonder...


Modern systems are so reliant on extensive third party library support for things like database, web, network, and graphics library support I’m not sure the futureproofing argument matters.


I definitely think it matters. You’re right, the environment changes. It’s certainly an exercise in software engineering to modularize your application into portable and non-portable parts. I think Lisp has treated this subject well; there are tons of libraries in Lisp that have been untouched for over a decade but still work fine.


Any hope of the Lisp community pulling a Swift and replacing the language with a drop-in-compatible new one built ground-up with lessons and sensibilities learned over decades?



SICL is still an implementation of Common Lisp, and not of a new programming language (give or take some additional features, such as first-class global environments). That said, there is some overlap between the authors of SICL and the authors of Well Specified Common Lisp <https://github.com/s-expressionists/wscl>; but WSCL only really defines some undefined and contradictory behaviour in the ANSI Common Lisp specification.


I have become more and more convinced that unspecified behavior in a standard is a bad thing. No, giving compiler implementers room to exploit variant behaviors in the name of optimization is generally not a good idea. I understand the political reasons for it, but that doesn't make it good.


Having never used Common Lisp, it's a language I've long wished I'd used.

However, working with Clojure and Scheme, I understand the power of repl driven development that is difficult to explain outside the experience of it.


> However, working with Clojure and Scheme, I understand the power of repl driven development that is difficult to explain outside the experience of it.

The only Lisp I've dabbled with was Racket, and I found the REPL-driven development to be a frustrating necessity. I would spend a lot of time trying to figure out the actual type of the thing that a given function (even a stdlib function) needed, and futzing around in the REPL seemed to be the fastest way to do it, but it was still quite slow.

It reminded me a lot of my extensive experience with Python where things that are easy in, say, Go, are quite hard. People rave about their repl, but it feels like they're comparing it to a script iteration loop rather than static analysis tooling.

That said, I completely buy the argument that Python's REPL is particularly bad and that there are other use cases where REPL-driven development shines. My mind is open, but these are the sorts of experiences the pro-REPL folk should be prepared to engage with in their evangelism. :)


Scheme is remarkably less interactive than Common Lisp, though the specifics depend on the dialect of Scheme.

Common Lisp is remarkably more geared towards interactive use, and while it makes it easy most of the time to be used with just a minimal REPL, as in a literal (loop (print (eval (read)))), it's best to use integrated, enhanced environments like SLIME/SLY/SLIMV/etc. or one of the IDEs (LispWorks, AllegroCL, Clozure CL IDE)
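For anyone curious, a toy Python spelling of that minimal read-eval-print loop (just a sketch; Python's eval only handles expressions, not statements):

```python
def rep(src, env=None):
    """One read-eval-print step: evaluate a source string and return its printed form."""
    return repr(eval(src, env if env is not None else {}))

# Driving it over stdin gives the whole toy REPL:
#   while True:
#       print(rep(input("> ")))
print(rep("(1 + 2) * 3"))  # 9
```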


> People rave about their repl, but it feels like they're comparing it to a script iteration loop rather than static analysis tooling.

You do know that statically typed languages have REPLs too? Like the ML family, including Haskell.

And when using something like a Jupyter notebook with a kernel for your compiled language https://github.com/gopherdata/gophernotes you can do similar interactive programming.

REPLs are about trying out ideas or changes.

Lisp REPLs take that a step further, as you interact with and in your whole actually running program.


> Lisp REPLs take that a step further, as you interact with and in your whole actually running program

With Python, I can add a signal callback which calls the built-in breakpoint() function. This means I can send my application a given signal, let's say SIGTERM, and it'll drop into the Python debugger, whereby I can then drop further into the Python REPL.

Does Lisp provide more than that out of interest?

Edit: ah I see hot reloading is a thing. Not explored that in python.
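A minimal sketch of that setup (using SIGUSR1 rather than SIGTERM so the signal isn't also a termination request; the handler name is made up):

```python
import pdb
import signal

def debug_on_signal(signum, frame):
    # Drop into pdb in the frame that was executing when the signal arrived;
    # from the pdb prompt, the `interact` command opens a full REPL in that frame.
    pdb.Pdb().set_trace(frame)

# Register the handler; trigger externally with: kill -USR1 <pid>
signal.signal(signal.SIGUSR1, debug_on_signal)
```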


Many early Common Lisp systems had impressive error handling. Certain exceptions would automatically break to an interactive Lisp window.

But Common Lisp also has a fancy system for recovering from exceptions. So after you replaced the broken function, you'd often see a list of "restart" points from where you could retry a failed operation.

So even if you failed in the middle of a complex operation, you could test out a fix without needing to abandon the entire operation in progress.

I'm not entirely certain if this was as useful as it sounds, in practice, but it was certainly very impressive.


There was a book published just last year about the CL condition system for anyone who wants to better understand it (disclaimer: it's sitting on my desk but I haven't read it yet): https://link.springer.com/book/10.1007/978-1-4842-6134-7


It's like CSI vs Dr House. Debugging in Python is forensic; you can figure out why your program died, but you can't really fix it. In Common Lisp, debugging can be curative as well as forensic. If you hit a problem like calling foo from bar when foo isn't defined, you can just write an implementation in your editor, send it to the running Lisp, and continue with the current execution. If a class is missing something, you can change the definition and not only have it take effect for all live instances, but also say how to transform the existing instances in your program. Have a look at the example given in the "docs": http://clhs.lisp.se/Body/f_upda_1.htm; you should get the gist even without knowing Lisp.
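The difference is easy to see from the Python side: redefining a class in a REPL leaves existing instances attached to the old class object, with nothing like CL's update-instance-for-redefined-class to migrate them. A toy sketch:

```python
class Point:
    def __init__(self, x):
        self.x = x

p = Point(1)

# "Redefine" the class, as you might in a REPL session
class Point:
    def __init__(self, x, y=0):
        self.x, self.y = x, y

q = Point(2)

# The old instance still belongs to the *old* class object and never
# gains the new slot; nothing migrates it automatically.
print(isinstance(p, Point))  # False
print(isinstance(q, Point))  # True
print(hasattr(p, "y"))       # False
```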

I'm overgeneralizing (I've had remote repls in production python application and used them to hotfix things), but only a little. You will appreciate the difference if you work with long running computations, where you don't want to have to start over from scratch if something went wrong right at the end.


Yes, it was mentioned in the post: you normally get prompted not just when there's a breakpoint, but also when your code calls something that doesn't yet exist, for example, or tries to access an undefined variable [1]... you have the option to "fix" the issue and continue from the same point, continue with a different value, or just let it crash... and there's stuff like profiling and de-compiling [2], all available directly from the REPL as well.

[1] https://lispcookbook.github.io/cl-cookbook/debugging.html

[2] https://lispcookbook.github.io/cl-cookbook/performance.html


> You do know that statically typed languages have REPLs too? Like the ML family, including Haskell.

I do, but I don't see how that relates to the bit of my post which you've quoted. I certainly didn't claim or imply that REPLs and static type systems were mutually exclusive, only that REPLs are a poor substitute for many static analysis tasks.

> And when using something like a Jupyter notebook with a kernel for your compiled language https://github.com/gopherdata/gophernotes you can do similar interactive programming.

Yeah, I'm aware. I operate a large JupyterHub cluster (among many other things) at work. :)

> Lisp REPLs take that a step further, as you interact with and in your whole actually running program.

That sounds nice, but it's too abstract to persuade IMHO. Personally, it seems like REPLs can make certain feedback loops a bit faster, and to that extent they seem like a modest quality of life improvement; however, considering how lisp folks positively rave about them, I assume I'm missing something.


> that REPLs are a poor substitute for many static analysis tasks.

Maybe I should have asked you what you use the REPLs of statically typed languages for.

I actually never used REPLs for that (neither in Python, CL, Clojurescript nor Elixir).

> That sounds nice, but it's too abstract to persuade IMHO.

Of course it is. I also didn't get it until I tried it myself. That was actually the main reason I tried (Common) Lisp at all (well, I had been using Emacs Lisp before, but that doesn't really count ;).


> Maybe I should have asked you what you use the REPLs of statically typed languages for.

I don't really use them, which is kind of my point. Once in a great while I'll do some quick testing in the rare case that using a REPL is actually faster than just running the program, but to even call that a marginal improvement in my quality of life would risk overemphasis.


I use repls a lot, both when I used clojure and now when I mainly use python. The reason for me is language exploration. I write my program and don’t always know how to do something and the internet is ambiguous.

With a nice REPL I can rapidly iterate, trying different things and doing micro in-situ experiments to explore and understand how the language (or, more frequently, a library) works.

It’s very edifying.


> I assume I'm missing something.

You are indeed; Racket's REPL is very barebones. Use Common Lisp with either SLIME or Sly and Emacs.


> You are indeed…

How puzzling that you do not spell-out exactly what you believe they are missing.


It should be obvious? He uses Racket; Racket has a very simple REPL, as opposed to CL, which would let him use the features he has heard mentioned about Lisp: interactivity, short feedback loop, etc. This has been written about thousands of times on here; I don't intend to repeat it.


Are you saying the Racket REPL does not support "Interactivity, short feedback loop" ?


Yes; not when compared to the CL REPL.


> I would spend a lot of time trying to figure out the actual type of the thing that a given function

That's just because you were a noob. You hadn't learned and internalized the standard library yet, and you hadn't learned to understand the flow and structure of types and functions and variables.

I too when I started with Lisp had the same struggles, but they go away with expertise.

That's why you'll hear people talk about "training wheels": when you program in a statically typed language with an extensive IDE doing a lot of static analysis for you, you basically operate constantly with training wheels on. You never really learn what class does what, what methods they have, what types things are; you rely heavily on your IDE.

For example, ask an experienced Java developer to write some Java in Notepad: they don't know what to import, where to find anything, completely lost, productivity hits zero.

And I'm not denigrating Java devs or that setup. What happens in Lisp is that you have to learn your language; you can't rely on an IDE or the compiler. In theory you could also learn Java to that level: stop using an IDE and you'll be forced to really know the language as well.

Here's the kicker: once you learn the language to that level, your productivity in it skyrockets! But until you do, it's a slow crawl.

And it's only once you've got that experience that the ability to change the program as it is running through a REPL really brings speed; you don't use it to figure out basic things like types and what standard functions exist, but to tackle real business logic, or to explore libraries and third-party APIs to get a grasp on them quickly.

Take this from someone who went from strongly typed Java, C++, C#, and ActionScript 3 to a Lisp. I'm now way more productive in Lisp.

P.S.: Also, you'll have static analysis in Lisp as well, depending on which one and your setup. In Clojure, I have static analysis, an IDE, a connected REPL, and use all three actively.


Come on now, this is a silly take with more than a little elitism.

An IDE (for a sufficiently static language) eliminates so much unnecessary mental load that is otherwise wasted on avoiding inevitable human errors—typos, type mismatches, forgotten cases.

I don’t want what is essentially trivia swimming around in my head any more than I want to find out about the inevitable errors I’ve made when I come to compile or run my program.

I want to know that no such issues exist well before then, so I can focus on making sure my program actually behaves correctly.


I'm sorry, I know it can come out that way, but I am a developer that uses both statically typed languages and happily use powerful IDEs with them, and I also develop in a Lisp (Clojure more specifically), almost to a perfect 50/50 split.

And I don't mean to say that you're a bad developer or not smart enough if you use an IDE. Of course you should use an IDE with Java or C# or C++. Those languages are all very good, and when paired with a good IDE can be quite productive. And even with a Lisp, you should and will want to use a powerful editor or IDE with good auto-complete, linting, jump to definitions, etc.

What I'm trying to say is that to be productive in a Lisp, it's a different learning curve.

You just can't come and say: "well I tried it a little bit, and I couldn't figure out how to do anything and I was really slow, so I just don't get the point of a REPL and I don't get why people claim it's a hugely productive tool."

I'm telling you why, it's like complaining that you type slower on a keyboard than you can write by hand for the first year of you using a keyboard. There's a learning curve.

You can't just say: "well I can't figure this out. Where is the IDE to help me with it? Where are the type definitions? Where is the compiler to guide me? I guess the Lispers are just full of it."

No, you didn't do what Lispers do. Lispers learn the language until those things are no longer a problem, and then they unlock a newfound productivity, combined with the ability to work inside a running program, quick iteration, and being able to extend and mold the language into what is most efficient for you and your problem, etc.

It's 100% ok if you say that you don't feel it's necessary for you to go the extra mile to learn a Lisp properly. That's fine, if you find you're sufficiently successful and productive already with your language, that's totally cool. But you can't not properly learn it, and then claim that the claims of those who have are invalid.

It's like if you started to doubt a touch typist who tells you that learning to touch type makes you a more productive typist, because when you tried, it was hard to learn and you were slower at typing.


What would really help to persuade people that they'd be more productive in Lisp would be examples of software written in Lisp.

I was a teenage Lisp enthusiast and have seen these sorts of Lisp discussions go round and round in circles on online forums for the past couple of decades. Ultimately, the corpus of publicly available software written in Lisp just isn't that compelling.

It's often said (and may be true, for all I know) that there are lots of companies quietly using Lisp behind the scenes and reaping the benefits without shouting about it. But to be persuaded, people need to see results of the claimed productivity gains in the form of actually existing software.

It's also worth bearing in mind that there are lots of people (like me) who did give Lisp a serious try (Common Lisp in my case) and then moved on for various quite sensible reasons. Lisp enthusiasts sometimes speak as if anyone who doesn't love Lisp can't possibly have given it a thorough trying out, but I would question that assumption.


If you've tried it properly and didn't personally find it compelling, that's ok. I've seen people for whom their mental model of how they think and approach computer tasks doesn't lend itself to it, or people who just have other personal preferences.

As for examples of software made with a Lisp, I feel people are never happy with them, like a constantly moving goalpost, but there's a certain chicken-and-egg to it.

Some examples:

    - HackerNews
    - Emacs
    - LispMachines (the entire OS)
    - AutoCad (uses Lisp as a scripting language)
    - Jak and Daxter (the first game)
    - Yahoo Stores (pre-sale)
    - Reddit (pre-IPO)
    - Datomic (no-sql tuple DB)
    - The flight system in the Boeing 747 Next (their latest 747 as of this writing)
    - Apache Storm (the distributed stream processor)
    - Walmart (they use it for backend stuff)
    - CircleCI (build automation)
    - Riemann (a high scale distributed event stream publisher)
    - worldsingles network (bunch of dating websites)
    - NuBank (a Brazilian bank)
    - ConsumerReports (all their backend)
    - The Climate Corp (agriculture software)
    - Roam Research (note taking app)
    - Kevel (ad server as a service)
    - HighSpot (sales rep software)
    - Reify Health (health related software)
Now those aren't all Common Lisp per se, but Lisp languages: Arc, Common Lisp, Scheme, Clojure, Emacs Lisp, etc.

There's also a lot of use of Lisp that isn't a closed software product; often it'll be a backend service or parts of a system using Lisp. I know Amazon, Microsoft and Netflix have some such use. It's not wide-scale, but you'll find some teams there using some kind of Lisp, as you'll sometimes see job offers that mention it.


Sure, here’s our personal example: https://github.com/atlas-engineer/nyxt


Is there a good standard reference or bible for the language?

Something like C's K&R or Lua's? It's sooo handy to have a clear picture from the creators themselves.



A typical standard lib may have thousands upon thousands of functions, with many less commonly used arguments and options. If you are saying "memorise the lot" is the only route to productivity in Lisp, then I'm out, because that's unrealistic. And let's not get into libraries outside the standard lib: am I expected to memorise XML parsers, graphics APIs, and so on as well?

If you are saying "memorise what you commonly use" then that's fine, except that any typical program usually involves a lot of "lesser used" stuff as well.


You don't have to. Lisp development environments remember everything one loads into them: the definitions, their source location, their arguments, their documentation, who calls whom, the data types, ... One can then ask Lisp about all of that.


> One can ask Lisp about all that then

How? As in what commands/functions/macros?

Say I define a function and compile it with SLIME; how do I get the REPL to print it back to me? And for a function I loaded from an external library? Is there something like macroexpand-1 for functions which will print it out?


M-. on a symbol brings you to the source code.

Common Lisp also has the built-in function FUNCTION-LAMBDA-EXPRESSION, which returns the source for a function, when Lisp has recorded its definition.

Try (DESCRIBE #'some-function) in SBCL to get information about the arglist, the types, the source file.

In a SLIME editor buffer, try C-h m. This will display the buffer commands. There are a zillion commands for getting information about symbols, code references, etc.

On my Lisp Machine:

    Command: Show Source Code (a defined function spec) mts::init-facts
    Source code for MTS::INIT-FACTS:

    (defun init-facts ()
      (setf *init-facts*
            '((world  (loc (actor joe)    (val cave)))
              (joe    (loc (actor joe)    (val cave)))
              (world  (loc (actor irving) (val oak-tree)))
              (irving (loc (actor irving) (val oak-tree)))
              (joe    (loc (actor irving) (val oak-tree)))
              (world  (loc (actor water)  (val river)))
              (joe    (loc (actor water)  (val river)))
              (world  (loc (actor honey)  (val elm-tree)))
              (irving (loc (actor honey)  (val elm-tree)))
              (world  (loc (actor worm)   (val ground)))
              (joe    (loc (actor worm)   (val ground)))
              (irving (loc (actor joe)    (val cave)))
              (world  (loc (actor fish)   (val river)))
              (irving (loc (actor fish)   (val river))))))

    Command: Show Callers (a symbol [default MTS::INIT-FACTS]) MTS::INIT-FACTS
    MTS::MICRO-TALESPIN calls MTS::INIT-FACTS as a function.
    MTS::MICRO-TALESPIN-DEMO calls MTS::INIT-FACTS as a function.
    Done.
    Command:
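(For comparison, Python's inspect module gives a limited analogue for arglists and docstrings, though nothing like Show Callers; the function here is a made-up stand-in:)

```python
import inspect

def init_facts(actor, loc="cave"):
    """Toy stand-in for the Lisp example above."""
    return {actor: loc}

# Ask Python about the arglist and the documentation of a loaded function
sig = inspect.signature(init_facts)
print(list(sig.parameters))        # ['actor', 'loc']
print(inspect.getdoc(init_facts))  # Toy stand-in for the Lisp example above.
```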


I'm referring to what you commonly use, but I'm also talking about the part of the standard library that covers more basic operations: creating collections, iterating over them, defining and requiring modules, creating objects, looping, destructuring, literals, conditionals, data types, polymorphism, error handling, etc.

As opposed to say learning the standard library relating to MIDI access, or file IO, or other specialized things that in theory could be a library and aren't really core.

It might sound a little like syntax, but because Lisps have such a lean syntax, a lot of the fundamental constructs are part of the standard library, and they are more what I'm referring to.

For example you'll have things like: car, cdr, append, mapcar, some, every, symbols, keywords, loop, intern, quote, format, butlast, intersection, union, multiple-value-bind, apply, funcall, =, eq, eql, if, unless, typecase, cond, dolist, iterate, etc.

What often happens in Lisps is that each core function is very powerful and becomes almost a mini-DSL. So it takes a while to really learn all of those and have them memorized by heart, so that you don't constantly need to refer to the documentation about what they do, how they work, and how to use them. And often their power comes when they are combined, and it also takes some time to learn all the valid ways of combining them.

Before you are familiar with all that and it's second nature, almost instinctive, you'll be a lot slower to read and write code.


This was unnecessary, mate


> my extensive experience with Python where things that are easy in, say, Go, are quite hard

Can you give some examples? This is not something I've encountered (although I have much more experience with Python than with Go so there is probably plenty in Go that I have not encountered).

> the argument that Python's REPL is particularly bad

What argument is this? This is also not something I've encountered. I use Python's REPL all the time and it greatly speeds up development for me.


Yes, Python's REPL speeds up development, but Lisp's REPL (we should say image-based development model) is even more interactive! We never wait for a server to restart, our test data is live on the REPL, we compile our code function-by-function with a keystroke and we get instant warnings and errors, when there is an error we get an interactive debugger that pops up and allows us to inspect the stack, go to the buggy line, recompile, come back to the REPL, resume the execution from the point we want, objects are updated when a class definition changes and we can even control that… it's crazy. From time to time it becomes a mess and I restart the Lisp image, but it's about every two weeks :p I tried to explain more here: https://lisp-journey.gitlab.io/pythonvslisp/

Note that I'm not a 10x master, I use CL for mundane, real-world tasks (access to an FTP, parsing XML, DB access, showing products on a website, sending email with sendgrid…), my programs had 1 trivial bug, I am more productive and I have more fun… Just to say it has its uses, even today, even with "all the libs" of Python (and btw, see py4cl).


> We never wait for a server to restart

I'm not sure I understand. When I run Python's REPL, I'm running the interpreter on my local machine; I'm not talking to any server. The same would be true if I run the Lisp interpreter, but I don't see how that is any advantage for Lisp over Python.


Start your Python REPL, do your interactive development, measure your average time before aborting and reloading the python instance. How much time elapses? Minutes? Hours?

When I've done Common Lisp heavily, I've lived in the same instance between multiple days up to weeks. This feels perfectly natural.

Admittedly it is hard to justify why this difference is meaningful.


indeed, I was thinking web development for that example, but I'm talking to my local development server, not a remote one (although it's trivially possible with a swank connection). I often have to wait for Django in a non-trivial app. I often set an ipython breakpoint, try things out, continue, and iterate.

Anyways, do you develop your app when you are in the REPL? Or do you try things out in the REPL and write back what you found out in the files? If you've found a way to send changes from your editor to the REPL, then good, but it's not built-in, and it will lack the other things I mentioned.


> do you develop your app when you are in the REPL? Or do you try things out in the REPL and write back what you found out in the files?

Yes. :-) Both strategies can be useful. (And note that "app" does not necessarily mean "web app"; web applications are just one kind of application. There are lots of different kinds of applications. The REPL helps with all of them.)


sure. Then a web app is a clear example of Lisp's increased interactivity.


The increased interactivity appears to be due to your particular framework, which you have built using Lisp but which is not the same as just "Lisp". Similar frameworks have, I believe, been built using Python.


no, forget the web server, there is more, I promise.

> we compile our code function-by-function with a keystroke and we get instant warnings and errors, when there is an error we get an interactive debugger that pops up and allows us to inspect the stack, go to the buggy line, recompile, come back to the REPL, resume the execution from the point we want, objects are updated when a class definition changes and we can even control that.


The linked article of Mikel Evins further in the thread might give you more hints.


Other than the "breakloop" functionality, which I agree is (AFAIK) unique to Lisp interpreters, I don't see anything described there that can't be done in a Python REPL.


If you can't understand what people are trying to say, I recommend trying to experience it for yourself. Install Common Lisp along with Slime for Emacs, and try things out. Or try out a Smalltalk environment like Squeak or Pharo which would also give you the same "enlightening" experience.

I know you are skeptical because python has a repl too and so it is hard to see what these "live" environments like lisp/smalltalk are offering. But trust me (and the others) that there is more than what python offers.

I have written python professionally probably more than any other language I have used, so I'm not ignorant of python btw.

No one is trying to fool you. But you don't have to take our word, please try it out for yourself!


> If you can't understand what people are trying to say

I understand it just fine. I've used a Lisp REPL. I'm just pointing out that not all of the features of the Lisp REPL that are being mentioned are unique to Lisp.

> I know you are skeptical

I'm not "skeptical" at all about what can be done in a Lisp REPL. As I just said, I've used it. I have not claimed that a single thing anyone has said about the Lisp REPL is false.

I think you are reading things into my posts that I have not said.

> you don't have to take our word, please try it out for yourself

As I said above, I already have. Don't jump to conclusions.


> Lisp's REPL (we should say image-based development model)

This looks like something specific to your particular development workflow for one particular type of application (a web server). REPLs (for both Lisp and Python) are much, much, much more general than that.


Why do you think the Python REPL is particular bad? What would you do to improve it? I think it works pretty well, but maybe I'm just used to it.


Last I checked you couldn’t await inside a REPL and various other idiosyncrasies. Lots of small paper cuts.


Code parsing in the Python REPL is different from that in Python source code files. Not MUCH different, but different.

Specifically, if you try to paste a function definition that has blank lines in it, into the REPL, the first blank line will be interpreted as "end of function" and the rest of the function will become incorrectly indented garbage.


You can’t even go into a namespace and change functions there


> I understand the power of repl driven development that is difficult to explain outside the experience of it.

I've been having trouble with this as well. I think we'd do best in removing "REPL" from any explanations, and instead use the words "Interactive Development", "Dynamic Development", "Hot Reload on Steroids" or something similar, as when I use terms like that, people understand it's nothing like the "repls" ("shells" really) other languages provide.


The REPL is the nucleus of interactivity in Lisp.

The first Lisp, called LISP I, and its successors from John McCarthy and his team had a remarkable collection of innovative technology:

* a reader for symbolic expressions. These are nested lists of symbols and other data objects like numbers and strings

* a printer for symbolic expressions.

* an evaluator for symbolic expressions. Each symbolic expression evaluates to a value. Side effects on the running Lisp could include added variables, data, or function definitions.

* an automatic garbage collector taking care of freeing no longer used memory

* piece-by-piece allocation of data

* a form of programming with functions, also using recursion

* a way for Lisp to store and restore the main memory heap to/from external storage

* a Lisp compiler written in Lisp, which generates assembler, and an assembler which creates code in the runtime -> the first interactive, incremental, in-memory compiler

* a way to program with Lisp code generated by Lisp macros

It had a batch REPL. This Read-Eval-Print-Loop would read a file of expressions, expression by expression, evaluate them and print each return value into a file. It used the above functions READ, EVAL and PRINT.

Lisp would start up by restoring a default image as the heap. It would then execute the file via the batch REPL and save a new, changed image as the new default image. The next time Lisp started, it would restore that latest image.

A next step in the evolution was then to connect a terminal and let the user enter a Lisp expression. Lisp reads it, evaluates it and prints the value back. Side effects again might change the running Lisp. The REPL then waits for the next terminal input. Code could also be loaded from files.

That way the REPL was not some interface which the debugger brings up on request. It was the main interface for the programmer to build up large Lisp systems piece by piece.

We are talking about the early 60s here.

In the coming decades the idea of a REPL morphed into extensive interactive resident development environments like Interlisp <http://interlisp.org>. Interlisp combined the REPL with managed source code, structure editing for code and data, automatic error correction of code, undo of evaluation effects, a virtual machine, ... and a lot more. Here Lisp, the IDE and the programs under development were running in one Lisp runtime. We are talking about the early 70s.

Interlisp moved at Xerox PARC onto their new graphical workstations. Thus the REPL-based development environment moved to graphical user interfaces. The Interlisp team got an ACM Award for their pioneering work.

The MIT AI Lab took that idea to their graphical Lisp Machines.

Today in 2021 Lisp programmers often use SBCL as the implementation and GNU Emacs with SLIME (or similar) as their development environment. It's still usual to develop against a running program, with potentially multiple REPLs into the program, using Emacs/SLIME to send code for remote evaluation to the running Lisp.

GNU Emacs itself is a Lisp program, too - with its own integrated Lisp and the development tools for it. GNU Emacs has its own REPL, called IELM, the interaction mode for Emacs Lisp.

Even in Lisp not all development interaction goes through a REPL, but there are still a few very powerful ones, like those of Allegro CL, LispWorks, McCLIM, Interlisp (recently open sourced) or the Lisp Listener from Symbolics Genera. Typical features are rich interactive error handling in multilevel break loops with restarts and the ability to continue from errors. The above also support image-based development and integrate Lisp, the IDE and the software under development in one program.


or, as has been suggested to me, "image-based development" (even though a CL implementation need not be image-based, most are).


There are plenty of lisps that offer these benefits without being image-based; Clojure, for example, or Emacs Lisp


That happens with languages in the Smalltalk family too, like Pharo. It's hard to convey the convenience of the Transcript window and the whole environment without actually showing or experiencing it.


But repl workflows are not special to Clojure and Scheme. People in JS set breakpoints in their IDE, pause at a point in their script to interactively explore program state all the time.

Is this not the same?


Not really. With Clojure you don't even have to leave your editor (I use Emacs with the fantastic CIDER package). You can put your cursor over an S-expression, hit two keys, and evaluate and see the results of the S-expression directly within the text buffer. The feedback loop is so quick it's basically instant - even tabbing over to a browser or terminal window feels lethargic in comparison.

I haven't used Common Lisp much, but I've seen a few really impressive demos of debugging in its available REPLs. This is a longer one, but a good tour of SLIME (also an Emacs package): https://youtu.be/_B_4vhsmRRI


In PhpStorm I hit a hotkey to set a break point, then a hotkey to make the API call to that code and immediately get an xdebug breakdown of the entire state of the program at that point. It’s almost instantaneous and I don’t have to switch windows. I would say that’s pretty close, although the upfront learning and config to get that working means it is higher overhead overall.


For context I use phpstorm and Clojure

Here is a video I recorded which hopefully demonstrates just how much faster you can go in Clojure

https://youtu.be/L0af0bc5Jec

Like I think a parent comment said, it's hard to explain, but once you get it, it's fun, malleable and exploratory

IntelliJ also has a debugger that works with Clojure for when it's appropriate

In php you can't redefine a function at runtime without something like runkit, in Clojure you can fix your systems at runtime


It really isn’t and I suggest you give clojure an honest try to see why. It’s really hard to explain.


Part of it is cultural I think. In Clojure you'll often find yourself building your app incrementally, sending individual functions to the repl from your editor. I don't think there's any reason why you couldn't do that in Javascript.

Common Lisp takes this to another level though with stuff like restarts. Where the app will throw an exception during development (perhaps because a function isn't implemented yet) and instead of crashing you'll end up in the repl where you can make some fixes and keep running the app.
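For readers who haven't seen restarts before, here is a small hedged sketch (the function and restart names are illustrative): an error is signaled when a lookup fails, but a USE-VALUE restart lets the caller, or the interactive debugger, supply a value and continue instead of unwinding the stack.

```lisp
;; Sketch of a Common Lisp restart. On a failed lookup we signal
;; an error, but also establish a USE-VALUE restart offering a way
;; to recover at the point of the error.
(defun lookup (key table)
  (restart-case
      (or (gethash key table)
          (error "No value for ~S" key))
    (use-value (v)
      :report "Supply a value to use instead."
      v)))

;; Non-interactively, a handler can invoke the restart by name;
;; interactively, the debugger presents it as a menu choice.
(handler-bind ((error (lambda (c)
                        (declare (ignore c))
                        (invoke-restart 'use-value :fallback))))
  (lookup :missing (make-hash-table)))
;; => :FALLBACK
```

The key point is that the error-handling policy (which restart to take) is decided outside the code that signals the error, so during development you can simply pick "define the missing function and retry" from the debugger.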


There is a nice little story in the book Practical Common Lisp (Peter Seibel, 2005). Quoting it below:

An even more impressive instance of remote debugging occurred on NASA's 1998 Deep Space 1 mission. A half year after the spacecraft launched, a bit of Lisp code was going to control the spacecraft for two days while conducting a sequence of experiments. Unfortunately, a subtle race condition in the code had escaped detection during ground testing and was already in space. When the bug manifested in the wild--100 million miles away from Earth--the team was able to diagnose and fix the running code, allowing the experiments to complete. One of the programmers described it as follows:

Debugging a program running on a $100M piece of hardware that is 100 million miles away is an interesting experience. Having a read-eval-print loop running on the spacecraft proved invaluable in finding and fixing the problem.


I've heard this before and it does sound like the most epic hack in the history of programming.


Recently there was a bit of a fuss when people discovered the Mars Ingenuity helicopter runs python3. Now I wonder, if its software ever crashes, will people be able to just launch the python REPL and debug it?


That is the thing about FP in general. Data is the center. State is just data. Program is just data. Capture data, feed it locally - get results.


> 100 million miles

That looks like about 9 light-minutes away, so about an 18-minute round-trip latency?

It might be super epic, but doesn't sound super nice :)


After my Clojure conversion and baptism, I've taken to writing Python apps in a similar style using Jupyter notebooks, and find it to be a tolerable middle ground. Especially if the Python is functional style with good referential transparency.

I.e. in each notebook cell, prototype and exercise your function in isolation. Then copy them over to the real py module.

Still, I miss the elegant data literals in Clojure whenever I'm slogging through some literal Dataclass objects for testing..


I used to do this in Jupyter, but switched to vscode. If you have comments that look like:

    # %%
It will treat the following chunk of code like a Jupyter cell and provide inline UI to run it. It's almost the same thing as a notebook, but with no markdown, and your file is just a plain python script.


Nice! I've seen a mention of that here and there, but haven't given it a proper shot yet.


It is not quite the same. I'd argue that, while browser environments are very good, they're trying to solve a slightly different problem, and the overlap in capabilities is less than complete. Lispers sometimes say that the browser devtools experience is about halfway toward the power of a good Lisp environment, but I would argue that the converse is also true.


I'm also curious, I think what it comes down to is that Clojure lets you change the code of a running application, rather than reload it.


You can do that with C# in Visual Studio though.


The greater Clojure hot-reload superpower comes from embracing the idiomatic division of state and function, though. Hot reload in C# will probably have to re-instantiate classes or restart the app. In clj, if your application state is kept in a global atom à la (defonce app-state (atom {})), then redefining your functions won't touch your data.

The only reason to do a full restart would be if you changed the data structure expected by the functions in a way that's not semantically backwards compatible.

Big-whoop on a stateless webserver, maybe, but on the front end using ClojureScript for an SPA it is nothing short of magical and has ruined any other browser dev work flow for me.


...and Common Lisp and Smalltalk take it a step further. You can go ahead and change the data structure in a way that's semantically incompatible. The runtime will notice that the definition has changed and either automatically reinitialize any existing instances to conform to the new definition, or, if it doesn't know how to do that, pose a UI asking you to tell it how.

And when control reaches a function that expects the old kind of object and runs into an error, it'll enter a breakloop that you can use to modify the now-out-of-date function to handle the new definition properly. Once you've redefined the function you can then tell the runtime to resume and it will proceed as if the function you called had originally had the new definition.
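A minimal sketch of the class-redefinition behavior described above (the class and slot names are just for illustration):

```lisp
;; CLOS updates existing instances when their class is redefined.
(defclass point () ((x :initarg :x :accessor x)))
(defparameter *p* (make-instance 'point :x 1))

;; Redefine the class with an extra slot...
(defclass point () ((x :initarg :x :accessor x)
                    (y :initform 0 :accessor y)))

;; ...and the pre-existing instance picks it up (lazily, on
;; its next access) without being recreated:
(y *p*)  ; => 0

;; For incompatible changes, a method on
;; UPDATE-INSTANCE-FOR-REDEFINED-CLASS can migrate old slot
;; values into the new layout.
```

This is the mechanism that lets long-lived images survive schema changes during development.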



>Is this not the same?

not even close, no


I attempted to answer the same question with a project after being asked again and again at work.

https://github.com/codr7/whirlisp


Quick question while we are all talking about lisps - any recommendations for a scheme-based web development repl? I’ve tried Racket and their framework, and while the documentation is great the one thing I really miss is that you have to do restarts every time you make an update.


You can use racket-mode to run a Racket REPL in Emacs, and it works the way you want. I use Emacs for all my Racket work, and it's a nice experience.


Lisp is a linked list, where the first item describes what to do with the rest of the list. And it assumes infinite memory.

There is also very particular textual representation of Lisp.

But there could be others.



Repl? What modern language does not have a shell?


The repl in Common Lisp is a persistent image that's actually running the program you're working on. You can inject functions, packages, change packages, debug, etc, all directly from the environment running the web server without restarting. It's more interactive development combined with hot reloading as opposed to a repl from other languages (though it does have a repl as well)


And you can do things like redefine classes and all the extant instances of that class (or its subclasses) automatically are updated.

The general workflow is: compile and load your program and tests, run the tests, change the code and tests, recompile and reload changed files (using an internal dependency manager that is something like a make system), and continue.

The focus on the repl is a bit misleading, since all the changes are off in files. I mean, you CAN type in a new function definition to the repl, but there's not much point.


Is that it? Future-proof and REPL?

R code from the 90s still runs today on the latest versions and everyone uses a REPL to write their code. I don't see a lot of people jumping into R any time soon, quite the opposite in fact.


R is not trivially parsed and manipulated, since the code is not just a list. I think that’s the real magic of lisp.


Accessing and modifying code during execution is actually rather easy in R. Functions such as substitute or quote are commonly used, and a language expression in R is a list that you can edit and evaluate back.

R's internals actually sound quite lispy: both values and code are SEXPs (S-expressions).

R is inspired by Scheme and Common Lisp; it has first-class functions and multiple dispatch.

https://www.stat.auckland.ac.nz/~ihaka/downloads/Interface98...

"This decision, more than anything else, has driven the direction that R development has taken. As noted above, there are quite strong similarities between Scheme and S, and the adoption of the S syntax for our interpreter produced something which “felt” remarkably close to S."


Interesting. So it could be said that R is kind of a bastard child of Lisp & APL?


I think homoiconicity is the magic of LISP. Ease-of-parsing is not a quality worth optimizing for in a language


I agree with your first sentence; and disagree with the second.

I parse code, every time I read source. Ease-of-parsing is absolutely a quality worth optimizing for. (The difference is, ease for who? Optimize for a human parser, not a machine parser.)


I know I will not be liked for this, but optimizing for a human parser is definitely not what LISPs are doing — we can adapt and “pattern match” the many parens, but the thing is, we are better adapted to more text-like programming languages (though if someone were to say it is cultural, I would have to agree).


Doesn't seem like it will prevent you from doing Language Oriented Programming, or advanced embedded Domain Specific Languages. Perhaps it's less ergonomic, I am not an R user.

http://adv-r.had.co.nz/dsl.html


Maybe that could have been mentioned in a "why lisp" article.


[flagged]


This sort of criticism is bar-none the most typical negative criticism any time someone describes a benefit of a language, especially an unfamiliar one like Lisp.

You give code, “this is way too detailed, what’s the essence?” or “how is a for-loop macro special at all? we have for-loops already”

You give small simple snippets, “not real world!”

You don’t give code, “not enough detail, just feel-good marketing!”

You give pseudocode, “well I want to see the real thing!”

The concepts presented in the article are hardly about code anyway! They’re about a development workflow and a language environment supporting it. How should the author demonstrate interactive development through a paragraph of text, besides just describing it? You really need a live demo to see how everything moves around, not a code snippet.

My suggestion to you as a critic is to actually ask, in good faith, for something substantive you want to see. For instance, “I wish $AUTHOR showed us what they mean about removing a class slot.” Then the author, or others, would be happy to respond to you.


We're talking about programming a lot on HN, and Lisp is one of the more familiar things in programming, so sometimes assumptions are too high, I guess.


> Lisp is one of the more familiar things in programming

Simply not true in practice.

https://insights.stackoverflow.com/survey/2021#section-most-...

edit: fixed link, header was pointing at wrong section


Clojure is top of that list and Common Lisp is just below Go. Not sure what your point is...

Even if people don't touch Lisp they know it's that strange language with all the parentheses...


Are you sure you're looking at the main list, for all respondents?

After counting a bit, Clojure seems to be in 31st place and Common Lisp isn't even on the list.

Clojure is lower than even languages which are not considered popular, for example Powershell or Dart.

https://insights.stackoverflow.com/survey/2021#section-most-...

Maybe your browser didn't jump to the correct section of the survey?


He changed the link since I made my comment. See his edit. Originally he was pointing to developer pay by language, which Clojure tops and Lisp is fairly high on.


Yeah if you click the header hypertext (not link icon) in my corrected link, it misdirects to another section. Sorry about that.


Clojure is nowhere near the top, nor near Go?

  Go  9.55%
  ... about a dozen entries ... 
  Clojure 1.88%
  Elixir  1.74%
  LISP  1.33%


See one of my previous responses. Link was changed to highlight a different stat.


i think the parent is well aware of the unpopularity of lisp. yet that does nothing to disprove that lisp is one of the most 'familiar things in programming'. almost all programmers who have attended some academic course on programming languages know about lisp


Perhaps true for past generations.

Even MIT recently rewrote SICP in Javascript for its undergrads, IIRC.


MIT also switched from Scheme to Python for undergraduate CS programming courses. They did it for the libraries and ease of use.


And because the goal of 6.01 was different from 6.001, which was the biggest thing in the change. A graduate of 6.001 was supposed to be on a path to understand computation et al.; a graduate of 6.01 is given enough skills to apply the bit of programming necessary for their next courses but isn't expected to actually grok programming, just to string enough libraries together.


im definitely not part of the past generation, but i get your point nonetheless. cs courses now are more focused on creating industry-ready developers than they are on creating solid academic programmers. in this sense cs courses now are much more vocational than they are academic. if we can agree on this difference between developers and programmers, then i still agree with the familiarity statement by the parent

also by the way, the recently released follow-up to SICP [0] is still very much in lisp, MIT Scheme to be specific (like the original SICP). however, this book is far from an introductory text in software engineering

[0] https://mitpress.mit.edu/books/software-design-flexibility


Not really, unless you go to a very small subset of top notch universities, in some geographies.


Here Lisp is #30, higher than Haskell, Julia, Clojure, D, Ada, Scala, Erlang, Elixir, F#, OCaml… https://www.tiobe.com/tiobe-index/ (yes that index measures nothing meaningful, but maybe at least a "familiar thing") (edit: and just below Rust, which is #29, ahah)


If you look at modern, general-purpose languages in order, you get something more like

Python, C, Java, C++, C#, VB.net, JS, PHP, Groovy, Ruby, Swift, Perl, Go, Lua, Rust, Lisp

That would put it at 14 to 16 depending on your feelings about VB and Perl.

Loads of Java people talk about Kotlin, but it's less popular than lisp

Google pushes Dart/Flutter, but it's less popular than lisp

Even Typescript is rated as less popular than Lisp.


Instead of being snippy, you could click two links and find yourself looking at the entire source code of a web browser written fully in Common Lisp by the team putting out the article. How's that for a snippet!


If you’d see snippets you probably would run away at the sight of all these parentheses


(because (why (not lisp)))


Lisp code I write today may run 10 years from now, but would I be able to read it a month from now? Maybe I just never got familiar enough with it, but code I wrote for my AI class was meaningless to me 6 months later.


That's a general problem, not a Lisp problem. Code will always be harder to read later, especially code from when someone was new to the language (I wouldn't want to read most of my grad school AI class code from 15 years ago either). But that's not unique to Lisp code. My Java, Python, C, C++, Smalltalk, SML, Scheme, Prolog code from other courses in college would almost all be pretty incomprehensible (depending on the scope of the particular project it was composed for, and how far into my academic career it was written). If you're a moderately experienced programmer and you spend more than a few weeks learning Lisp, you should be able to write code that's perfectly understandable months later.


It's very common for me to revisit Common Lisp code I wrote 20 years ago because it still works and I've forgotten what algorithm I used. Within a couple of minutes I have one of two reactions: "Hey that was pretty clever" or "I get how this works but this coding style is idiotic." Usually the latter.

So yeah, I can glance at a Common Lisp program and tell pretty quickly what it's doing. That comes from studying 10,000 Common Lisp programs. It's pattern recognition.


The other reaction is "why didn't I write more comments". Old me hated future me, I guess.


One better be able to read old code. For example the first lines of the SBCL Lisp implementation were written around 1980, when it was called Spice Lisp. The version from 1984 looks readable: http://www.softwarepreservation.org/projects/LISP/cmu/Spice_...


There are Fortran II programs that I wrote decades ago where I would totally struggle to remember what was going on.

I have had the same experience with C++ that I wrote years ago.



