What are the enduring innovations of Lisp? (2022) (elliottslaughter.com)
180 points by eslaught on June 1, 2023 | 160 comments



Homoiconicity wins in my book. Clojure is the best proof I have of this: What mattered was not the linked list or the cons cell. What mattered was that all code was just nested calls to functions and macros with the SAME general form: (function arg1 arg2 arg3 ...)

Expression-based programming using this core form is trivially easy, and makes immutable data structures downright pleasurable to work with. It also makes code generation easy, though I've never taken full advantage of it; even so, a DSL made of runtime functions is not only possible, but a natural extension of anything you're already doing. Backing those homoiconic expression forms with typical high-performance data structures like arrays and hash maps, as Clojure and Janet do, is a winning formula for making complex tasks simple.
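For instance, here's a tiny hiccup-style sketch of a DSL that is nothing but data literals plus one ordinary function (`page` is a name made up for illustration):

  ;; an HTML "DSL" as plain data + a runtime function
  (defn page [title & body]
    [:html
     [:head [:title title]]
     (into [:body] body)])

  (page "Greeting" [:p "Hello World!"])
  ;; => [:html [:head [:title "Greeting"]] [:body [:p "Hello World!"]]]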

All the patterns of yesteryear for modeling problems begin to look quaint when your program eventually just becomes a pyramid of expression calls, with `main` at the top and database hits at the bottom. The organization of the stuff in between really just becomes a question of standardizing function signatures and organizing modules so humans can easily navigate them.


Embedded in this response is a consideration for how programming languages handle data. In Java, if you have a config file, does that file use Java syntax? No. What is that config file written in? Likely XML. Can you write XML literals in Java? No. Wouldn't it be nice to:

    XMLElement el = <html><body>Hello World!</body></html>;
In JavaScript, we have probably the closest thing to Lisp in this regard with JSON. You can define JSON literals in your JavaScript code; config files like package.json are written using this object notation. If you want to work with, say, XML, there are libraries to convert your XML to JSON and you can take it from there.

But, JSON is limited. How do you define dates for your datatypes? Probably use a string. In Lisp, you can use a string or have some (date year month day) form. How do you represent rational numbers? Maybe you'll risk using floating point or you'll just go with a string. Lisps have rationals built in or you could roll your own with (rational numerator denominator). So your JSON

    {
      "name": "John Doe",
      "birth-date": "1970-01-01",
      "account-balance": "1000.01"
    }
can become

    (user #:name "John Doe" #:birth-date (date 1970 1 1) #:account-balance 100001/100)
When Lispers need to interact with JSON, they convert it to S-expressions. When Lispers need to interact with XML, they convert it to S-expressions. Wouldn't it be nice if the other languages had syntaxes that made you want to define data using that language's syntax and not JSON or XML?


Visual Basic .NET actually does have XML literals [1] - one of the few reasons to use it!

[1]: https://learn.microsoft.com/en-us/dotnet/visual-basic/progra...


Scala also had it, but it seems they replaced it in Scala 3 with a more general concept[1], as everyone thought it was a bad idea to special-case XML (why XML, and not JSON or something else?).

[1] https://docs.scala-lang.org/scala3/reference/dropped-feature...


It's worth looking at Rebol [1], a homoiconic language influenced by Lisp, Forth, Self & Logo, which had a big influence on JSON too!

  user: [name: "John Doe" birth-date: 1970-01-01 account-balance: $1000.01]
[1] https://en.wikipedia.org/wiki/Rebol


> had a big influence on JSON too!

It might have influenced Javascript, but the entire idea in JSON is to have a data exchange syntax that can be plonked into Javascript as a literal. That requirement leaves room for no other influence, pretty much.


Here's a video where Crockford mentions Rebol (and others) being an influence on JSON - https://www.youtube.com/watch?v=-C-JoyNuQJs&t=1233s

I've seen it mentioned on Rebol chatter that Crockford approached Carl Sassenrath (creator of Rebol) to open-source & use Rebol prior to creating JSON. So having a Javascript literal wasn't on Crockford's mind at that point.

NB. Rebol was closed source until 2012.


This makes sense, but to be honest, the way I'd really like to represent data alongside code is as a table or a tree (i.e. with presentation/interaction more like a spreadsheet).


A Lisp response to this might be: tables and trees are just special forms of lists.

You can work with tables, trees, maps, vectors, etc. and if they can support list operations (filter, map, reduce, first, last, etc.) then they can be treated like a list and you can do those operations. You can still build IDEs that will show you tables of variables at runtime and graphs of your tree structures.
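For example, in Clojure the very same operations run unchanged over several concrete structures (illustrative snippets):

  (filter odd? [1 2 3 4])   ; vector   => (1 3)
  (map key {:a 1 :b 2})     ; hash map => (:a :b)
  (first #{:x})             ; set      => :x
  (reduce + (range 5))      ; lazy seq => 10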


Very much agree. It also gets you some really useful structural editing capabilities as a side-effect. I can shuffle code around so easily and blindly by just telling it to move an expression right or left, or adjust parens in or out. I also don't have to learn new weird syntax every time the language wants to do something kinda new. So many new Clojure "language" features are actually just libraries, which is awesome. Tooling just keeps working and others can provide competing implementations. For instance, clojure.spec.


This is the best part. Especially compared with JavaScript, with so much variation between runtimes and the resulting importance of Babel. Even the `loop` DSL can be broken down into basic Lisp structure. The flexibility of the language, by allowing itself to be altered, is the most important aspect for me. After completing a first version of a solution, I often find my mind wandering, searching for a cleaner, more understandable version of the code. And this is usually done by molding the language to the problem domain.

Something like React in JavaScript exemplifies this. JSX was added for an easier developer experience, but the issue itself is created by the separation of code and data in the language. And CLOS would be perfect for the Component model.


Although if you want to be maximally pedantic about it, you actually get homoiconicity for free in raw machine code. Though Lisp certainly "rediscovered" it in an abstract language (and Church's lambda calculus is also homoiconic, and directly inspired much of Lisp, so I suppose he deserves some credit too).

I sometimes ponder about whether we'd be able to recognise any programming languages from alien civilisations. And I usually come to the conclusion that it would have to be either a lisp or a forth, because at their core they're so dead simple it's hard to imagine not inventing them by accident at some point. The fact that Lisp was invented so incredibly early on in computing history, and at least partially by accident, supports this hypothesis.


> It also makes code generation easy, though I've never taken full advantage of it;

I bet you have because Hiccup is an HTML DSL! Compojure is likewise a DSL for building Ring handlers based on HTTP request types.

But to your larger point, I also find I don't reach for macros at all. Looking carefully, I think this is because we have first-class syntax for vectors and maps, and those containers are very flexible about the type of their contents. Combined with the Seq abstraction and a well equipped standard library most Clojure code I write is macro-free.
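To illustrate: where another language might grow a routing macro or mini-language, a vector-of-maps literal is often all you need (`home` and `about` are hypothetical handler functions):

  (def routes
    [{:path "/"      :handler home}
     {:path "/about" :handler about}])

  (map :path routes) ; => ("/" "/about")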


> most Clojure code I write is macro-free.

Conceptually, most forms that are not of the form (fn args...) are macros. There are a few, like cond, lambda and label forms, that are not macros. I don't think Clojure technically has label forms though.


You are correct. I should have clarified. I meant I don't write many custom macros myself.


Perhaps I'm being pedantic, but `when`, `cond`, and many other control-flow constructs are implemented as macros in Clojure. So you probably use macros a lot!


In CL, IF is a special form and COND is a macro.
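To sketch how that works, here's a toy COND-style macro expanding into nested IFs (MY-COND is a made-up name; a real COND handles more edge cases, like clauses without bodies):

  (defmacro my-cond (&rest clauses)
    (if (null clauses)
        nil
        (destructuring-bind ((test &rest body) &rest more) clauses
          `(if ,test
               (progn ,@body)
               (my-cond ,@more)))))

  (let ((a 5)) (my-cond ((> a 10) :big) (t :small))) ; => :SMALL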


So everyone uses C.


SBCL is self-hosting. It feels like a bit of a stretch to say syscalls are macros, but if you really want to torture the language that way then sure, because the kernel is mostly C.


In the case of Clojure I would argue it is the combination of very consistent - simple, some would say ;-) - syntax and stability in the language, which partially stems from that simplicity, and, as you mention, the extensibility of Lisp's "function first, arguments later" approach and homoiconicity. However, dynamic typing, immutable data structures by default, software transactional memory built into the language itself, stellar interop with Java (or JavaScript in the case of ClojureScript) and the REPL workflow that eases things like hot-code reloading are all major contributors and reasons why Clojure(Script) users are usually quite enthusiastic about the language family.

Don't get me wrong, Clojure does have some warts inherited from the approach of the host platform. E.g. I think its math could do nil punning by default, with e.g. unchecked-add and similar skipping nil punning and overflow checking (for best performance in some tight scenarios). ClojureScript basically does nil punning already.

As a side note, I would love it if ClojureCLR picked up steam a lot more. I heard .NET is not as dynamic as the JVM, which makes these things harder, but if ClojureCLR were on the same level as at least ClojureScript in terms of support and tooling, the Clojure family of languages would be really hard to beat for any kind of business application development. With ClojureDart, Babashka, NBB and to some degree Clojerl, Joker, Jank and others, it seems the family extends much beyond the original "business applications" area into scripting, embedded and to some degree HPC or network programming as well. I guess we will see where it catches on.


No better example of this than Racket.

Function application itself is an overridable function. (f 1 2 3) compiles to something like (#%app f 1 2 3). And you can hook into that.

Runtime type checking, stack traces, debuggers..
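A minimal sketch of the pattern, as a module language (the file name and log format are made up):

  ;; my-lang.rkt -- like racket/base, but every application is traced
  #lang racket/base
  (provide (except-out (all-from-out racket/base) #%app)
           (rename-out [traced-app #%app]))

  (define-syntax-rule (traced-app f arg ...)
    (let ([g f])                           ; evaluate the operator once
      (printf "applying: ~s\n" '(f arg ...))
      (g arg ...)))

  ;; A module that starts with  #lang s-exp "my-lang.rkt"  then logs
  ;; every call, e.g. (+ 1 2) prints "applying: (+ 1 2)" and returns 3.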


You used to be able to do fun stuff like that in clojure but it ended up being axed in the name of performance.

(Well, I guess you still can, but the standard library is off limits now basically.)


In most cases, the new definition of #%app is a small wrapper around the standard one, with some special case. If the new definition is small enough, the Racket compiler can inline it and hopefully detect that it's the standard case, and you get the same speed at run time.

In some cases, it may be useful to use a macro to "force" the inlining, but it may increase the total size of the code.

There are a few languages implemented in Racket that do this, https://docs.racket-lang.org/rackjure/index.html https://docs.racket-lang.org/lua-manual@lua-lang/index.html that redefine #%app and IIRC in most cases get the fast application after compilation.

In any case, if someone has a similar project with a custom #%app that is too slow, they can ask on GitHub or Discourse and I (we)'ll try to help. In some cases it's impossible, but in other cases some tweaks make the redefinition of #%app more optimizer-friendly.


> All the patterns of yesteryear for modeling problems begin to look quaint when your program eventually just becomes a pyramid of expression calls, with `main` at the top and database hits at the bottom. The organization of the stuff in between really just becomes a question of standardizing function signatures and organizing modules so humans can easily navigate them.

You don't need homoiconicity for that; in fact expressions tend to be easier to write in a language that's slightly non-homoiconic. Even Lisp or TCL fans tend to use a macro or similar to embed mathematical expressions, because regular infix mathematics is significantly more readable than Polish notation.

Having a good way to write data literals is important, and the lack of it is a big part of why Algol-family languages are so awful - C/C++/Java/etc. code ends up being stringly typed because it's easy to write literals for strings and a bunch of random number formats, and cumbersome to write literals of anything else. But that doesn't mean your data syntax has to be exactly the same as your code syntax; some similarity is helpful, but the benefits of making your code look exactly like the AST of your code are pretty marginal.


> because regular infix mathematics is significantly more readable than Polish notation

I agree but maybe this is simply the result of our math education


Expressing data and instance declarations in the same syntax you express code and class definitions is an essential feature that enables "Instance First Development" and the "Instance Substitution Principle", as supported by OpenLaszlo, and described by Oliver Steele. And throwing declarative constraint based programming into that mix is really synergistically powerful and expressive.

https://news.ycombinator.com/item?id=21841054

DonHopkins on Dec 20, 2019:

My remark was just an old Java joke I repurposed for Ant!

"Java is a DSL for taking large XML files and converting them to stack traces." -Andrew Back

https://www.reddit.com/r/programming/comments/eaqgk/java_is_...

But in all seriousness:

OpenLaszlo used XML with embedded JavaScript in a way that let you extend XML by defining your own tags in XML+JavaScript. I've done a lot of work with it, and once you make your peace with XML (which seemed like a prudent thing to do at the time), it's a really productive enjoyable way to program! But that's more thanks to the design of OpenLaszlo itself, rather than XML.

https://en.wikipedia.org/wiki/OpenLaszlo

OpenLaszlo (which was released in 2001) inspired Adobe Flex (which was released in 2004), but Flex missed the point of several of the most important aspects of OpenLaszlo (first and foremost being cross platform and not locking you into Flash, which was the entire point of Flex, but also the declarative constraints and "Instance First Development" and the "Instance Substitution Principle", as defined by Oliver Steele).

https://en.wikipedia.org/wiki/Apache_Flex

https://blog.osteele.com/2004/03/classes-and-prototypes/

The mantle of constraint based programming (but not Instance First Development) has been recently taken up by the "Reactive Programming" craze (which is great, but would be better with a more homoiconic language that supported Instance First Development and the Instance Substitution Principle, which are different but complementary features with a lot of synergy). The term "Reactive Programming" describes a popular old idea: what spreadsheets had been doing for decades.

OpenLaszlo and Garnet (a research user interface system written by Brad Myers at CMU in Common Lisp) were exploring applying automatic constraints to user interface programming. Garnet started in the early 1990's. Before that, Ivan Sutherland's Sketchpad explored constraints in 1963, and inspired the Visual Geometry Project in the mid 1980's and The Geometer's Sketchpad in 1995.

https://en.wikipedia.org/wiki/Reactive_programming

http://www.cs.cmu.edu/afs/cs/project/garnet/www/garnet-home....

https://en.wikipedia.org/wiki/Sketchpad

https://web.archive.org/web/20160303205845/http://math.coe.u...

https://en.wikipedia.org/wiki/The_Geometer%27s_Sketchpad

I've written more about OpenLaszlo and Garnet:

What is OpenLaszlo, and what's it good for?

https://web.archive.org/web/20160312145555/http://donhopkins...

>Declarative Programming: Declarative programming is an elegant way of writing code that describes what to do, instead of how to do it. OpenLaszlo supports declarative programming in many ways: using XML to declare JavaScript classes, create object instances, configure them with automatic constraints, and bind them to XML datasets. Declarative programming dovetails and synergizes with other important OpenLaszlo techniques including objects, prototypes, events, constraints, data binding and instance first development.

Constraints and Prototypes in Garnet and Laszlo

https://web.archive.org/web/20160405015129/http://www.donhop...

>Garnet is an advanced user interface development environment written in Common Lisp, developed by Brad Myers (the author of the article). I worked for Brad on the Garnet project at the CMU CS department back in 1992-3.

https://news.ycombinator.com/item?id=17360883

[...]


> The organization of the stuff in between really just becomes a question of standardizing function signatures and organizing modules so humans can easily navigate them.

That’s one impressive load-bearing “just” you have in there..


> functions and macros with the SAME general form: (function arg1 arg2 arg3 ...)

Macros don't have this general form in Lisp. Macros just have a symbol in the prefix position, but the rest of the enclosed objects are not a simple list of args. The enclosed forms are arbitrary and the interpretation (parsing, transformation, ...) is done by the macro.

These can be valid macro forms:

   (infix c := a + b )

   (rule :if   (a < c and temperature > 20)
         :then set climate-control to cooling)


But those are lists of args. The macros happen to be processing the lists in a non-linear way, but they're still just lists.


Most people assume that args are restricted to Lisp syntax and that args are evaluated, like in function calls. But inside a macro form this is not the case: the macro can implement a whole new syntax and semantics -> then to say the enclosed items are 'args' is misleading: it's source code in a new sublanguage with a different syntax/semantics. Technically the macro gets called with arguments, but only as part of the process of code transformation. For the user there can be a different syntax and semantics. The enclosed code then may have a different syntax, where the (operator . args) syntax is extended or no longer used.

That also means that the simple (foo . args) pattern is now no longer valid inside the macro form and analyzing the source code of a macro form can be arbitrarily complex.

> The macros happen to be processing the lists in a non-linear way

Macros are code transformers. They get code as lists and return new code as a list. This generated code then is evaluated.

Unlike functions, macros are not processing normal arguments and returning an evaluation result; they are code transformers. The resulting transformed code is then run and it returns a value.

Thus we have two (interleaved) phases of execution, instead of one:

* the code transformation phase

* the evaluation of the code
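Concretely (SWAP! is a toy macro invented for this example):

  (defmacro swap! (a b)
    `(rotatef ,a ,b))            ; runs at expansion time, returns code

  (macroexpand-1 '(swap! x y))   ; => (ROTATEF X Y)    -- phase 1

  (let ((x 1) (y 2))
    (swap! x y)                  ; the expansion is what actually runs
    (list x y))                  ; => (2 1)            -- phase 2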


> the macro can implement a whole new syntax and semantics -> then to say the enclosed items are 'args' is misleading: it's source code in a new sublanguage with a different syntax/semantics.

Not exactly. It still needs to conform to something the Lisp Reader can absorb, i.e. something that looks like lisp. Then there are reader macros. That's where the true extensibility resides (and in addition to the new syntax elements, you'll need to reserve boundary keywords for that syntax).


> Not exactly.

That is exactly the point. Lisp syntax works differently. s-expressions are a data language and the Lisp syntax is not defined over characters, but over s-expressions.

> i.e. something that looks like lisp.

No, it would just look like s-expressions. You could define a completely different syntax and semantics on top of s-expressions: Logic (like Prolog), Postfix, ...

No one says that s-expressions need to have the operator first. That's what Lisp defines as syntax. But you could write a postfix language in s-expressions:

  ((3 4 +)
   9
   (pi sin)
   pi
   2
   *)
Above is a valid s-expression, but it is not valid Lisp. As such it does not look like Lisp, but it looks like a postfix language encoded on top of s-expressions.

If a reader would reverse all expressions, then it could be executed as Lisp. You could also define a syntax where parentheses are replaced by significant indentation.

s-expressions are not Lisp syntax, they are a data syntax which is used to encode Lisp.

Historically, s-expressions were also defined only for data, and Lisp code used m-expressions for programs and s-expressions for data.

CADR would be written similar to:

   cadr[A]=car[cdr[A]]
The function would then be called on data:

   cadr[(1,2)]

   -> 2
That's what Lisp code might look like, if it had not been found out that one could also represent the code as s-expressions and that this would have interesting effects. Lisp designers tried to get away from this syntax several times.

ML switched [] and ():

similar to:

  fun cadr (l) = car ( cdr (l)) 
called then similar to

  cadr([1,2])
(read) reads every s-expression. Not just Lisp code. It knows nothing about the syntax of the Lisp constructs: DEFUN, LET, DEFCLASS, UNWIND-PROTECT, DOTIMES, ... When EVAL then gets called, the s-expression external syntax is gone. EVAL gets Lisp code as Lisp data, not as characters or strings.
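A small sketch of that:

  (let ((form (read-from-string "(+ 1 2 3)")))
    (values (type-of form) (eval form)))
  ;; => CONS, 6   -- READ produced a list, and EVAL consumed data, not text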


Yeah I think this is pretty much the only thing that still matters as an advantage: explicitly writing the source code as a tree, where the left part of each node specifies how to interpret the rest. Makes transformations pretty straightforward. Anything else is just window dressing at this point.

Too many people will end up using it to write horrible DSLs but it's the spider man thing I guess. (Please, if you think your problem is best solved by a DSL, reconsider.)


> Too many people will end up using it to write horrible DSLs but it’s the spider man thing I guess.

What is “the spider man thing”? Sounds like an interesting concept I’m not familiar with.

Also, why do you think DSLs are a bad idea? I’ve only ever heard the claim that they are a game-changer, but it’s never been clear to me why. So I’d love to hear the counter argument.


DSLs are great if your problem and requirements are set in stone and never change. Unfortunately, that has not been the case for any project I've ever had in the real world. Every time some DSL was used to specify the logic and behavior, it ended up being an albatross around the project's neck that kept it from adapting to the new requirements. It would have always been better to just use a proper programming language in the first place.

I put DSLs into two camps. The first, the one that I'd say is okay to use, is something like Google's GCL or jsonnet. This is basically what's known as a "configuration language". It has a small amount of power to let you do a bit of text processing and abstraction, but not too much. If at any point you need more than those offer you, you don't want a DSL anymore. You just want a real programming language.


"With great power comes great responsibility."

In Spider-Man's origin story one of the first things that happens after he gains his powers is that he tries to make some money as a masked wrestler. A guy robs the box office at the wrestling venue and Spider-Man lets the guy escape because it's none of his business. The same robber later shoots and kills Spider-Man's uncle, from which Spider-Man takes the lesson expressed in the above quotation. The same lesson then colors a large fraction of the character's storylines forever after.


Ironically, Clojure is the weakest of the Lisps when it comes to metaprogramming.

Reader macros are forbidden, and culturally, the community avoids macros. There are some good reasons to prefer functions over macros, but one consequence is that the community has less experience with metaprogramming, and uses it less.


PostScript is as truly homoiconic and polymorphic and interactive as Lisp.

And by truly homoiconic I mean not trivially and uselessly homoiconic like TCL, where "everything is a string".

And at the same time it's "JSONic" in the sense that it supports the full range of JSON data types, and is polymorphic in the sense that objects (as opposed to variables, array elements, and dict slots) have type and can contain objects of different types (unlike Forth, which is untyped, and is also commonly compared to PostScript because they're both stack based).

But of course PostScript was designed decades before JSON was a thing. However, the point is that Lisp S-Expressions don't directly support polymorphic dictionaries, but PostScript (and JavaScript/JSON, and Python) do.

The PostScript-based NeWS window system:

1) Used PostScript code instead of JavaScript for programming.

2) Used PostScript graphics instead of DHTML and CSS for rendering.

3) Used PostScript data instead of XML and JSON for data representation.

And PostScript (and thus NeWS) also has an interactive REPL like Lisp, to support live and exploratory programming. Imagine being able to telnet to an X11 server and create windows and draw on them interactively!

PostScript is not only homoiconic, but also point-free (or "tacit"), like Forth!

https://en.wikipedia.org/wiki/Tacit_programming#Stack-based

https://en.wikipedia.org/wiki/Talk%3AHomoiconicity#PostScrip...

https://news.ycombinator.com/item?id=18317280

>The beauty of your functional approach is that you're using PostScript code as PostScript data, thanks to the fact that PostScript is fully homoiconic, just like Lisp! So it's excellent for defining and processing domain specific languages, and it's effectively like a stack based, point free or "tacit," dynamically bound, object oriented Lisp!

The fact that PostScript code IS PostScript data, without any intermediate AST representation or reflection API, means that a PostScript data structure editor is also a code editor.

PostScript's homoiconicity and interactivity (plus the fact that PostScript is great at drawing scalable text and graphics) makes it easy to make a visual programming language and debugger interface for PostScript, with a graphical direct manipulation REPL loop that supports "direct stack manipulation" by dragging objects on and off the stack, and editing code and data by drag-and-drop and cut-and-paste.

PSIBER Space Deck Demo

https://www.youtube.com/watch?v=iuC_DDgQmsM

>Demo of the NeWS PSIBER Space Deck. Research performed under the direction of Mark Weiser and Ben Shneiderman. Developed and documented thanks to the support of John Gilmore and Julia Menapace. Developed and demonstrated by Don Hopkins. Described in "The Shape of PSIBER Space: PostScript Interactive Bug Eradication Routines".

The Shape of PSIBER Space: PostScript Interactive Bug Eradication Routines — October 1989

https://donhopkins.medium.com/the-shape-of-psiber-space-octo...

Written by Don Hopkins, October 1989. University of Maryland Human-Computer Interaction Lab, Computer Science Department, College Park, Maryland 20742.

Abstract: The PSIBER Space Deck is an interactive visual user interface to a graphical programming environment, the NeWS window system. It lets you display, manipulate, and navigate the data structures, programs, and processes living in the virtual memory space of NeWS. It is useful as a debugging tool, and as a hands on way to learn about programming in PostScript and NeWS.

PostScript Source Code Available Here:

https://www.donhopkins.com/home/pub/NeWS/litecyber/

Introduction: Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts … A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding …. [Gibson, Neuromancer]

The PSIBER Space Deck is a programming tool that lets you graphically display, manipulate, and navigate the many PostScript data structures, programs, and processes living in the virtual memory space of NeWS.

The Network extensible Window System (NeWS) is a multitasking object oriented PostScript programming environment. NeWS programs and data structures make up the window system kernel, the user interface toolkit, and even entire applications.

The PSIBER Space Deck is one such application, written entirely in PostScript, the result of an experiment in using a graphical programming environment to construct an interactive visual user interface to itself.

It displays views of structured data objects in overlapping windows that can be moved around on the screen, and manipulated with the mouse: you can copy and paste data structures from place to place, execute them, edit them, open up compound objects to see their internal structure, adjust the scale to shrink or magnify parts of the display, and pop up menus of other useful commands. Deep or complex data structures can be more easily grasped by applying various views to them.

[...]


Definitely, when you can directly manipulate your AST with ease, metaprogramming becomes a breeze.


> It's also worth noting that code generation (and therefore metaprogramming itself) are also not fundamentally innovations of Lisp. For example, in C++, it is entirely possible to link your application to libclang and build Clang ASTs inside your application C++ code, and use the Clang compiler to emit and run that code.

Ahem: can we discuss the word innovations and fundamentally above, and put two facts on the table:

LISP: 1959 C++: 1979

If they'd said "is not unique" I could agree. Innovation is a statement of origination, and C++ did not originate this concept: it was already in a language system two decades prior to C++.


C++ is not from 1979. What is from 1979 is the "beginning of work on C with Classes", which is entirely different. It was not until 1985 that C++ got its first real implementation and a specification.


Does that undermine my central point?


I am not trying to undermine anything :D.


To anyone not already experienced with Lisps: there are varying schools of thought and opinions on what's important and valuable, within Common Lisp (CL), and within the broader Lisp family.

For example, although this writer doesn't think macros are important, the Scheme (and especially Racket) branch of Lisp ran with macros, then with various other DSL support that takes macros further (like Racket `#lang`). Racket also moved towards a strict definition of phases, and a very nice module system that works with that.

That might horrify some CL people, because it moves further away from the dynamic REPL live manipulation strength of CL, but others of us have found the tradeoffs very practical for our needs.


Hygienic macros and a strict phase separation are not distinctive to scheme, many languages have this now, most importantly Rust. And just like Rust macros, Scheme macros are not really an organic part of the language but some extra edifice bolted on top. Scheme definitely deserves credit for pioneering work here, but the only aspect that's of enduring distinctiveness that I'm aware of is Racket's #lang, which basically gives you a less messy and more powerful version of what you could do in Common Lisp with macros and read-tables.

My impression is that hygiene itself (which the scheme community tended to obsess over) is of minor practical benefit, but the fact that you get good error locations (because not using plain lists and symbols makes it easy to carry sufficient contextual information around[^1]) is a major upside.

Out of curiosity, are there additional important practical benefits you see, macro-wise, over Common Lisp (that would make up for the gimped repl)? I.e. in addition to better error messages?

[^1] I seem to remember being told Allegro Common Lisp does a good job here, but I assume identity still imposes some major limitations.


Rust macros are not hygienic. The biggest issue with this, I suspect, is the visibility limitation: due to lack of hygiene a macro can only refer to a public value. Thus some crates will publicly expose a value with a doc string saying "this is supposed to be private, please don't abuse it or you'll clobber the invariants this crate otherwise upholds".

https://doc.rust-lang.org/reference/macros-by-example.html#h...


> Hygienic macros and a strict phase separation are not distinctive to scheme, many languages have this now, most importantly Rust.

My understanding is that Rust's macros are only partially hygienic. They fall short of Racket's. To the best of my knowledge, Racket has the most hygienic and expressive macro system of any language today. The people behind it have put a lot of work into it over the past couple of decades, producing more than a few significant papers in the realm of PL research.

> My impression is that hygiene itself (which the scheme community tended to obsess over) is of minor practical benefit

I assume you've not written many macros that generate identifiers before. I assure you, hygiene is quite important for safely reasoning about your syntax!
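The classic illustration, runnable in Racket or any syntax-rules Scheme: MY-OR needs a temporary binding, and hygiene renames it so the user's own t is untouched.

  (define-syntax my-or
    (syntax-rules ()
      ((_) #f)
      ((_ e) e)
      ((_ e1 e2 ...) (let ((t e1)) (if t t (my-or e2 ...))))))

  (let ((t 5)) (my-or #f t)) ; => 5; with capture, the result would be #f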

> are there additional important practical benefits you see, macro-wise, over Common Lisp

Racket sports a focus on what they call "language-oriented programming". The gist of this community philosophy is that all significant software really is an API (or a layer of multiple APIs), and by treating these APIs as "languages" we can make them more ergonomic. Expressive macros enable a style of programming where you can make your API look however you want while still implementing it within whatever other language you're using. Pretty much all of my Racket projects end up with at least a few macros, though it's worth pointing out that the community also stresses that things that can be functions should be functions rather than macros.


I think hygiene vs. defmacro is pretty simple: syntax-case is explicit non-hygiene, whereas defmacro is explicit hygiene.

I prefer the former, even though syntax-case doesn't go far enough. As it is, in R6RS bindings are introduced unhygienically within the extent of a macro transformer, which stinks for complex enough macros. Sadly SRFI 72 never caught on.

The benefit of the thing giving us the gimped repl is that the runtime can know what something is at compile time. Modules can be compiled with something akin to blocks in SBCL, speeding up procedure calls in ways you can't really achieve with inline caches.

Chez spends comparatively very little time worrying about things like that, yet manages to have cheaper procedure calls than SBCL almost always.


> Hygienic macros and a strict phase separation are not distinctive to scheme, many languages have this now

Somebody has already written that Rust doesn't. And I wouldn't say 'many', I know of Elixir which does have them.


Rust macros are partially hygienic, but even scheme's macros turned out to have unintended hygiene violations (https://okmij.org/ftp/Scheme/Dirty-Macros.pdf).

> And I wouldn't say 'many', I know of Elixir

Rust, Julia and Elixir are three fairly mainstream languages with hygienic macros, and there are many more, from obscure languages (Dylan, and I believe Perl6 aka Raku) to outright esoteric ones (PLOT), as well as hygienic macro add-ons like sweet.js.


Rust macros still fall short, hence why so many crates need to drag syn along for the ride, making even more of a mess of compile times.


Honestly just using Steel Bank Common Lisp is the best choice if you are looking to build high quality software which just works.

A lot of my software, which earns a $4 million profit per year, has SBCL subsystems, though that is slowly decreasing as we are moving away from Lisp.


I'm curious. May I ask what you're moving toward and what the motivations and reasons were? What was great and not great about building on/with Lisp?


I am moving towards Go.

The greatest benefit of SBCL is that it's got great performance and the REPL jack in allows you to debug any application state. Building CL software is just amazing.

So as the company is growing really fast, and I never want to talk to a VC, I need to add that reliability into the system, as I can't spend a lot of time training people.

That can only be done by having the highest performance to simplest code ratio (since we lose the repl jack in).

Go is the clear winner here after trying a bunch of them.

It is also easy for people to learn and the amount of tutorials and resources online is great.

It is a bit sad, but the Lisp hacker bucket is a really small pool if you want to hire from so at the end of the day I had to compromise.

Having said that, Go is quite a workhorse and has the simplicity of C, so it is actually not that bad.


How many CL hackers do you need?

What about having a core of a few people, and leverage that Lisp productivity potential? If you need a lot more "bulk" work (say, for customer integrations/customizations), is it something that the core people can make easier? Such as with APIs or DSLs, and recipes, so that this other set of programmers doesn't have to all be CL experts?

You'll probably have to pay good money for that core of great CL hackers, though. Go programmers are more numerous, and maybe easier to find competent ones at commodity rates.

And is the number of CL hackers available on the job market decreasing? ITA found a lot of them, at one point. There are more Scheme/Racket programmers than there are jobs.


Are you able to talk about your product and SBCL's role in it?


If you have some great Lisp hackers who are also experienced in industry team software engineering, you step back and let them use whatever they decide is best to use. :)


Did you build it on your own? And why are you moving away from Lisp?


Yes I wrote all the first images. I love Lisp and think it is amazing if you approach it in the right way.

Now the company is growing at a rate that I need to hire people and build teams.

That is where Lisp is a hard bargain. The bucket of people who can write a new system from scratch without falling for the common traps is really small.

So that is why we are slowly transitioning away.


As someone who dabbles in Common Lisp I would love to know some examples of common traps in designing a system in CL, since I am probably bound to fall into many :)


> Now the company is growing at a rate that I need to hire people and build teams.

And yet you didn’t post on the Who’s Hiring thread. :) Not that I’m looking, nor am I not looking, but sounds like an interesting gig.


What is the field of your software product?


...code generation (and therefore metaprogramming itself) are also not fundamentally innovations of Lisp. For example, in C++...

Wait, what?

Maybe I'm confused but doesn't innovation mean doing something not done before?

C++ and even C are more recent than Lisp, they can't be used as counterexamples. Or am I missing something?

(Edit: other than that, I forgot to say: nice article.)


I think it's actually a problem throughout most of the article. But I think we can get the point of what the author means: certain programming language features that have enduring impact rose to prominence because they were in Lisp (and Lisp had a period of real prominence). So, they're not strictly speaking innovations of Lisp, but Lisp was responsible, as a matter of historical fact(ish), for them becoming more widespread.

I say “(ish)” because I very much suspect that if you did actual careful historical investigation of the sources, you would find that there’s vanishingly less genuine creation ex nihilo with computers than the standard stories say. Features or ideas that later become prominent, typically seem to be “in the air” or inchoate when the person or people we give credit to for creating them “created” them.


I just started learning Janet, and one of the things I really like is being able to use '-' in variable names. I didn't realize how much friction snake_case or camelCase, i.e. having to use shift in var names, generates. It's a small thing, but a thing that differentiates lisps and algols.


I love kebab case. I want it everywhere.


I discovered recently that bash function names and aliases allow -

  pi@4b:~ $ kebab-case-ftw () { echo yum; }
  pi@4b:~ $ kebab-case-ftw 
  yum
  pi@4b:~ $ alias kebab-kebob='echo yum'
  pi@4b:~ $ kebab-kebob 
  yum


Doesn't work for variable names:

  $ foo-bar=3
  foo-bar=3: command not found
Consistency in Unix? Sacrilege.

Make is better in this regard. You can have variables with . in them, and with computed variables you can simulate structures: $($(VAR).member). $(VAR) expands to some abc, and so $($(VAR).member) evaluates $(abc.member), the variable with a dot in its name.


Same. Outside of Lisp & some shell, you've got the OG Cobol and Raku. HTML attrs? CSS? It's a shame really. So easy to type and read.


This is a really good explanation of why I find Julia (effectively a Lisp in terms of these features) to be indispensable. The ability to generate code on the fly makes life so much easier that I just can't live without it.


Yeah I often describe Julia as a Lisp in sheep's clothing.

Or as the m-Lisp promised to us :) I chuckled when I read:

> The way that common Lisp systems produce executable binaries to be used as application deliverables is by literally dumping the contents of memory into a file with a little header to start things back up again.

Which is pretty much how Julia's sys-/pkgimages work. Pkgimages are an incremental variation on this idea.
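For reference, the Lisp side of that in SBCL (a sketch; `main` is a made-up entry point):

  (defun main ()
    (format t "hello from a dumped image~%"))

  ;; writes ./hello, an executable that restores the image and calls MAIN
  (sb-ext:save-lisp-and-die "hello" :executable t :toplevel #'main)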

One of the novelties in Julia is the world-age system and the limits on dynamism it introduces for eval.


Pity that Apple didn't push Dylan.


I agree that Julia satisfies the first two properties; however, it's not clear how it satisfies the third one (homoiconicity). In particular, why does the argument with regard to Python not apply to Julia as well?


I think this answer https://stackoverflow.com/a/31734725/5141328 by one of Julia's creators fits here. The TLDR is that homoiconicity is a mix of two separate things: how willing a language is to represent itself, and how close the syntax for code is to the syntax for the data-structure representing that code. Julia meets the first of these, but not the second. Whether this matters depends on why you care about homoiconicity. The biggest difference between Julia and Python here is that Julia has syntactic macros and python doesn't (although see https://peps.python.org/pep-0638/)


I just don't understand people who call von Neumann-style programming languages "lisp-like" or "almost a lisp". I've heard people say this of Python and Haskell as well, and I just don't see it, at all.


https://a.co/d/6NaRjQG

The "condition system" is niftier than I've seen elsewhere.


Indeed. For those who aren't familiar with the concept, you can consider them as the logical conclusion of exceptions.

Traditional error handling, as found in C for example, forces you to handle the error at the moment you detect it. Often, though, that's deep in a library, and what to do about the error depends on the context.

Exceptions allow a function to declare that when an error occurs in its dynamic scope, it should receive control to handle it. This is, in many ways, a major improvement.

However, consider a program that is parsing a data file. Halfway through the file, it encounters a malformed record. In an exception-based language it would throw an exception, unwinding the stack until you get to the main program logic. However, at that point you've closed the file, losing your position in it and any partially-parsed records. The only real recovery options are to abort reading that particular file or abort the load entirely.

Conditions allow the function that parses a record to declare that it can recover from a malformed record by replacing the binary data with something else, producing an error record, or producing some record that is given from the outside. Similarly, the code that loops over the records can declare a recovery path that skips the malformed record and continues with the next one. Then, when an error occurs, the main program logic can inspect the broken record (possibly by presenting it to the user) and instruct the condition system as to which recovery path to execute. Only then does the stack unwind, and only as much as necessary to get to that recovery path.

In short, exceptions separate detecting an error from handling it. Conditions add a third part, deciding how to handle the error.
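Here's a minimal Common Lisp sketch of that record-parsing scenario (every name in it is invented for illustration):

  (define-condition malformed-record (error)
    ((line :initarg :line :reader malformed-line)))

  (defun parse-record (line)
    (if (valid-record-p line)
        (build-record line)
        ;; signal the error, but offer recovery paths in place
        (restart-case (error 'malformed-record :line line)
          (use-instead (record) record)   ; replace the bad record
          (skip-record () nil))))         ; drop it and carry on

  ;; Far up the stack, the policy is chosen WITHOUT unwinding first,
  ;; so the file stays open and parsing resumes at the next record:
  (handler-bind ((malformed-record
                   (lambda (c)
                     (warn "skipping: ~a" (malformed-line c))
                     (invoke-restart 'skip-record))))
    (parse-file "records.dat"))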


I've read the same, almost word-for-word claims repeatedly, which is annoying given how misleading it is.

Concretely...

To know the context of the conditions, the conditions must carry the information. Otherwise, the handler would need to intimately know the implementation details to be able to retrieve the filename, line number, etc. If you can provide the information to the condition, you can fill a thrown exception with the exact same data. The exception can carry the filename, line numbers, token being parsed...

Second, the example itself is ludicrous. The caller of a file parser providing replacement data for a malformed file? In what world does that ever happen? How could it handle every possible way a file might be malformed?

Third, in every language, the same can be implemented with a callback. In C++, the standard is now to use std::function for this, which supports free functions, members, lambdas... pretty much everything. The only advantage of Lisp is that the declaration and registration of the callback is a language feature.


Yes, you can do this in ways other than conditions. However, I disagree with basically every objection you raise.

First, the decision made by the handler case doesn't necessarily care which file the error was in, what the line number is, etc. All I've ever needed to decide what to do (details below) was the text content of the malformed record. From the perspective of my program, there were only a few possible cases:

1. The record is malformed in a way that I know how to recover from. In that case, I can just do so and invoke the `use-instead` recovery path.

2. The record is damaged in a new and exciting way. I can log it and try to muddle on, in hopes of catching all the new error cases while I'm off doing more interesting things than waiting on a 6-hour job.

3. The record is damaged irrecoverably and future records depend on it (e.g., the file structure itself is damaged and this can't be recovered from). This is the rarest case I've come across, but also the only one that's convenient to handle with exceptions.

Further, if it was just a filename and byte offset that was needed to resume where I left off, you may have a point. However, suppose that there was an additional decompression step involved. You can't, with most decompression libraries, pick up decompression in the middle of a stream, at least not without littering knowledge of the decompression through the entire process. Further, bundling everything necessary to pick up the computation where it left off forces you to structure your code in a certain way. For example, packing the state of the computation into a class with member functions doing each part, so that the file, current list of results, etc, are essentially scoped globals. I estimate that this would have been at least 5x more code than what was essentially wrapping a stream with a couple of transformers and iterating over it.

To your third point, I could have structured the parser to call a callback with the details of the problem, which could throw an appropriate exception to unwind to a suitable recovery point. This is, after all, how conditions are implemented. However, conditions as part of the language mean that every error can have recovery paths registered, not just ones that the developers thought to provide callbacks for.

And finally, to your second point. While I originally stole the example from Practical Common Lisp, I've since had exactly this situation come up. I had a ~150GiB file containing, essentially, lines of JSON. My parser validated that the incoming JSON fit a schema and processed it into a more compressed form such that I could fit the aspects of the dataset that I actually cared about into RAM. Now, this dataset had been through several migrations, between a number of different platforms, and not all of the migrations were bug-free. In some cases, it treated UTF-8 as CP-1251 and transcoded that into UTF-8. Others got double-escaped. Still others had parts of some fields duplicated in ways that were easy to detect and undo. Some records were just duplicated outright, and some were different versions of the same record. All of these were recoverable, but it was 150GiB of data. I couldn't manually clean it first; I needed to run the program to see what it barfed on in order to fix it. Worse, being JSON, it compressed easily and this was at a time when 150GiB was more than half the disk space I had available to me. So of course the dataset was compressed on disk, and I was decompressing it as I read it.

Now, I'm sure that you can come up with a way that I could have packed the error recovery into the callback in the inner loop of the iterator, but why would I have? I had conditions at my disposal, and the way I actually did write it, I had the happy path in a perfectly clear straight line, and all of the various error cases and how to handle them lined up in a row next to it. The code was easy to read and work with, without any real efficiency cost. The fact that I could have made do with callbacks is no more relevant than that I could have made do using goto instead of loops and functions: we have these abstractions so that we can express what we want our programs to do at a higher level.


What do you people DO that you are doing tasks like this? I've always loved FP from college, but never ran into problems like this that would actually warrant using one


In a small sense, doing yet another data migration in a long line of data migrations. The data I had was a collection of proprietary test results about electronic parts, half hand-entered and half generated by various tools, some of which dated back to the 60's. (I strongly suspect that the dataset started out as a stack of punched cards used with an IBM System/360). In other words, about as boring big-business as boring big-business stuff gets. Nothing about the task really required using Lisp, but it was the language I pulled out of a hat to start playing around with the problem, and I got far enough within the first few days of experimentation to promote my prototype to the actual solution.

In a larger sense, and probably more relevant to your question, it's not about any particular application space where FP is "warranted", but rather about being familiar enough with enough different languages and architectures that I can look at a problem and see a variety of ways to solve it. Second, I always build a prototype in a "weird" language that I don't intend to put into production, because the prototype is more there to understand the problem than to come up with a production-ready solution. Weird languages add friction to deciding to productize the prototype, and therefore encourage me to really consider what tech stack is appropriate for the actual product.

Finally, it's probably worth noting that I rarely work on anything particularly interactive. If you're not touching GUIs or web stuff, you'll often find that you're a lot less constrained on your tech choices, because you don't actually need all that much from libraries.


In general, building programs interactively by changing them as they run. That's the default, normal way to write programs in certain languages (notably in Common Lisp and its antecedents and in Smalltalk).

In such languages, you want to be able to use the entire language at any arbitrary point, including at a point where execution has halted for the moment because of some error that has occurred in some arbitrary place. The condition system provides a nice set of tools for doing that.


> providing replacement data for a malformed file? In what world does that ever happens?

that sort of pattern happens in coding theory, error correcting codes for example.

> In C++

I don't claim to know the answer, but does it matter that the free variable allocation strategy for a Lisp lambda differs from what lambda means in C++?


Lisp also has a distinction between kinds of conditions but using the same underlying mechanism. You can have a plain signal which is informational. For instance in a data processing task you can signal your progress, and if there is a handler above it might update some GUI rendering of your progress or print out something to a terminal. If there isn't, nothing happens. For errors, you can have a handler which will select the restart option or otherwise handle the error, and if there is no handler you'll be brought into the debugger (typically, some modes of execution might cause a program to simply crash/terminate).
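A sketch of the informational case (the names are invented):

  (define-condition progress (condition)
    ((pct :initarg :pct :reader progress-pct)))

  (defun crunch (items)
    (loop for item in items
          for i from 1
          do (process-one item)
             (signal 'progress :pct (round (* 100 i) (length items)))))

  ;; With no handler, SIGNAL simply returns and CRUNCH runs silently.
  ;; A UI layer can opt in:
  (handler-bind ((progress (lambda (c)
                             (format t "~d%~%" (progress-pct c)))))
    (crunch *items*))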


https://www.amazon.com/Common-Lisp-Condition-System-Mechanis...

Unshortened URL: The Common Lisp Condition System by Michał "phoe" Herda.


- ~no syntax

- induction


- some taste for minimalism (although CL/CLOS might feel different back in the days)

- human exploration oriented (repl, mop/updates)

- open homoiconicity, as a programmer lisp is an open box, makes you grow more in depth

- understanding of high and low levels in one place

- radical taste for innovation.. do whatever, you're near free


I think that Lisp taste for minimalism was very much a product of the world adjacent to Lisp and early Lispers: logic. Look at the lambda calculus itself. That whole golden era of metamathematics in the first half of the 20th century.


Yeah, induction and minimalism are highly linked to mathematics and logic. They always try to find the smallest expression of the highest abstraction. Unlike hacking culture (which, like engineering, accepts compromises for the context at hand) or commercial ones (which enjoy bloat for financial/psychological reasons).


Lisp was a very early, if not the first, case of a language where types are associated with values, not variables. The evolution of implementations of Lisps showed that such languages could be implemented efficiently, even on stock hardware. This last realization took a while (witness how lisp machines were being developed into the 1980s.)
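A small illustration: the variable is untyped, and each value knows what it is.

  (defun describe-it (x)
    (typecase x        ; dispatch on the runtime type of the value
      (integer :int)
      (string  :str)
      (t       :other)))

  (describe-it 42)    ; => :INT
  (describe-it "42")  ; => :STR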


What will be interesting to see is whether the idea of a Lisp Machine comes back as ASICs become more popular. Will whole machines be designed to optimize performance with a DSL created for some particular domain? In other words, architectures optimized for specific languages (because those languages are optimized for specific domains), not necessarily literally Lisp Machines.

That thought popped into my head because of the “enduring“ in the title. What endures depends entirely on the needs of, and approaches taken by, people in the present.


I don't think lisp machines as such will come back, since experience showed many of their features were unnecessary. Instead, it will be interesting to see if there are any specific things that hardware assist could help with. The tagged pointers referred to in a sibling comment here could be an example.

On the flip side, it would be interesting to see if a "modern" lisp-like language could fit better with modern hardware realities. Today's efficient data structures are more array oriented, for example.


In a way, the ongoing adoption of hardware memory tagging to get rid of C memory corruption issues feels like Lisp Machine's revenge.


No, Lisp did not invent if statements. They're in Fortran's 1956 Programmer's Reference Manual.

http://bitsavers.informatik.uni-stuttgart.de/pdf/ibm/704/704...


Fortran 56's IF statement is very different from the Lisp if statement; the former is a conditional jump, and the latter is a structured control flow construct. Further, Fortran 56's DO isn't a do/while loop as we know it today, but rather a conditional come from. The entire concept of structured control flow was invented by Lisp, and it was very controversial at the time.
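To illustrate the contrast (a sketch):

  ;; Fortran's arithmetic IF was a three-way jump to numbered labels:
  ;;   IF (X) 10, 20, 30
  ;; Lisp's IF is an expression that yields a value:
  (defun sign-of (x)
    (if (minusp x) -1 (if (zerop x) 0 1)))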


Did Lisp improve the IF statement? Certainly. But here's John freakin' McCarthy citing Fortran's IF statement which he had used. That means Lisp didn't invent the IF statement. Lisp invented better syntax.

http://jmc.stanford.edu/articles/lisp/lisp.pdf


He invented the conditional expression.

  (print (cond ((> a 20) "a is larger than 20")
               ((> a 10) "a is larger than 10")
               (t "a is smaller than or equal to 10")))
COND takes zero or more clauses, each containing a test expression, and returns a value.


The third point is a little convoluted to me and doesn’t seem to be a beneficial innovation so much as a design choice.

Really, most of these benefits reductively boil down to "code is data", because serialization of code, first-class functions, REPLs, etc. all more or less follow from that single major innovation.

This single innovation has more far-reaching effects than many people realize: once people figured out that code-is-data and AST serialization mean little pieces of code (not full programs) can be transmitted and executed over a network, it enabled massive improvements in data processing through things like MapReduce and distributed databases.
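A toy Common Lisp sketch of the idea (ignoring the obvious security questions): code received as text is just data until you choose to evaluate it.

    ;; A form arrives as a string, as if over the network...
    (let* ((wire "(reduce #'+ '(1 2 3))")
           (form (read-from-string wire)))  ; READ turns text into list structure
      (eval form))                          ; EVAL runs it => 6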


LISP gave us eval/apply, encoding all of computing in purely symbolic terms [1].

Consequently, we can manipulate a Lisp program as-is on almost any computing medium: as VMs for Turing machines and quantum machines, as direct silicon hardware (Symbolics), even in (paper-assisted) wetware (e.g., solving chapter 1 of SICP entirely by hand).

[1] https://www.gnu.org/software/mes/manual/html_node/LISP-as-Ma...


In Clojure: thread macros. They allow very terse but powerful operations on data structures in a very readable way that's easier to reason about than nested calls in C-style languages.
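Clojure's -> is itself just a macro, which is rather the point: you can sketch an imitation in a few lines of any Lisp. A rough Common Lisp version (first-position threading only, not Clojure's actual implementation):

    (defmacro -> (x &rest forms)
      "Thread X through FORMS as the first argument of each call."
      (reduce (lambda (acc form)
                (if (listp form)
                    (list* (first form) acc (rest form))
                    (list form acc)))
              forms :initial-value x))

    ;; (-> 5 (+ 3) (* 2)) expands to (* (+ 5 3) 2) => 16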


didn't garbage collection originate with lisp?

if so, that should really be considered its most pervasive innovation


probably so, but surely the most widely used early garbage collector was in line-based BASIC.


Which Lisp is the most practical and easiest to pick up for a programmer? I already tried Emacs Lisp, but the experience of running Emacs was not great, so I gave up.


Clojure is a serious contender.

To be sure, it's still niche but it has a good ecosystem, piggybacks on mature java libs, and has an active community.

It's thoughtfully opinionated, which I appreciate. Also, ClojureScript is almost the only front-end dev experience I will tolerate.

The hot reloading is magic, and it is gangsta as F to use the exact same logic on the fast JVM (or CLR if you're nasty) backend as in the JS front end.


I tried to get started with Clojure, but their setup page is so confusing. The Windows version is still in beta? And IntelliJ does not have a template for it like it does for Scala, and JetBrains even supports F#. I thought it was a mature language.


Common Lisp was created and used by DARPA and its community of public research labs and private companies to support all kinds of industrial and scientific projects; I don't think there is a more practical Lisp. It's fast both at compile time and at runtime, supports any programming paradigm you want, and it's one of the most interactive programming environments ever. There are multiple open-source implementations, and there are at least two commercial implementations with paid support.

If you want to start, install the new IDE for Common Lisp called Lem [0] and follow the free online book Practical Common Lisp [1]. If you have any questions, the community can help you on Discord with the language itself [2] and the IDE [3].

[0] https://github.com/lem-project/lem/releases

[1] https://gigamonkeys.com/book/

[2] https://discord.gg/cuVpwZXJ

[3] https://discord.gg/sBYyjyC6


Emacs Lisp is one of the worst Lisps for most things, though it's a decent fit for its specific purpose.

People are suggesting Clojure, and Clojure is great, but it also has rigorous immutability semantics. If you're not familiar with that model, you'll spend as much effort learning it as learning Lisp, and it'll be unclear which things come from Lisp weirdness and which from immutability weirdness.

Someone will also probably suggest Racket, which has its strengths as a learning language but is also very large and complex, with numerous extensions to the core language that make it kind of a disorienting ecosystem.

I like Janet a lot. It uses "normal" data structures as its primitives rather than the traditional cons cells. So if you want to understand and be connected to historical Lisp, it will feel very different and be a poor choice (this also applies to Clojure, now that I think of it). OTOH, if you just want to use parens and prefix notation and play with macros, either will work.


Common Lisp is the most practical: it's an industrial-strength application language, gradually typed, very fast.

It even includes the most complete OOP system known to man (CLOS; see the sketch below), which you can completely ignore if you want to, and it'll still be the best choice.

It's not that complex either, maybe halfway between Lua and Python?

There are some rough edges, which come from being powerful and unopinionated, but they are trivially papered over as you work.
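For a taste of that OOP system (CLOS), here is a minimal sketch of multiple dispatch, where methods specialize on all of their arguments (something most mainstream object systems can't do):

    (defclass circle () ((radius :initarg :radius)))
    (defclass square () ((side   :initarg :side)))

    ;; A generic function dispatches on the classes of every argument.
    (defgeneric collide (a b))
    (defmethod collide ((a circle) (b square)) :circle-hits-square)
    (defmethod collide ((a square) (b circle)) :square-hits-circle)

    (collide (make-instance 'circle :radius 1)
             (make-instance 'square :side 2))  ; => :CIRCLE-HITS-SQUARE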


P.S. Schemes are 'lispy', but they are not 'Lisp'. Change my mind.


What is practical depends on your goals. Clojure can do pretty much everything, though it's a bit less practical for things like high-FPS video games and desktop GUIs.


Janet might be a good contender for games. It is very easy to write C modules for the performance-critical bits.


My favourite right now is Chicken Scheme, and the R7RS specification in general: the language is very minimal, yet has basically all you need. And Chicken has awesome C interop and a great little community on IRC in #chicken.


A Scheme, or Racket, which has the most packages of anything that isn't Common Lisp.

I'd suggest Chez Scheme (the Racket fork if you've got an ARM Mac), which is fast and has (real) threads.


> the Racket fork if you've got an ARM Mac

Or if you have a PowerPC Mac! https://leopard.sh/leopardsh/scripts/install-chezscheme-9.5....


The innovations of Lisp led to the innovations of Scheme:

Lambda: The Ultimate Imperative

Lambda: The Ultimate Declarative

Lambda: The Ultimate GOTO (Procedure Call Implementations Considered Harmful)

https://research.scheme.org/lambda-papers/


Indeed, from my understanding of the history, it seems that Lisp brought the idea of homoiconicity through the use of S-expressions. Scheme was the first to introduce lexical scoping and, in general, the idea of constructing the language from a small set of well-thought-out primitives (see the lambda papers).


bafe, I vouched for your comment. You may want to reach out to the mods (hn@ycombinator.com) because your account appears to have been shadowbanned. You have only 4 comments and all were dead (not marked as [flagged], which is typically there with user flags). You may have triggered one of the system's rules which sometimes catch new users (for instance, creating an account through some VPNs, reportedly).


Not banned - but yes, subject to extra restrictions because it's a new account. I've marked the account legit now so this won't happen again. Thanks for watching out for a fellow user!


Thank you all! I'm a very new user, and perhaps not as good a writer as most of you, but I am definitely a legit poster, not a spammer.


That's clear, and you're most welcome here!

I'm sorry our software got it wrong in your case, but you should be good to go now.


Great, thanks! It's really encouraging to see a community today where the moderators/owners reach out to users directly.


ΛΛΛ


Here are John McCarthy's own answers to this question, from 1980, about 20 years after he invented Lisp:

The Survival of LISP http://www-formal.stanford.edu/jmc/lisp20th/node2.html

He describes 15 innovations -- it is impressive that these were all innovations when he invented Lisp. He adds, "Of course, the above doesn't mention features that LISP has in common with most programming languages".


solving frontend dev in a permanent way.

https://reagent-project.github.io/


Any language where you can define the type for its own AST is homoiconic, isn't it? So what is the big deal?


Notably, these are not 'Lisp features that have been absorbed elsewhere' but the opposite: Lisp features that still make it unique.

tl;dr

1. no part of the system off limits

2. pervasive interactivity

3. homoiconicity


The first two sound like core parts of Smalltalk as well.


Smalltalk was proudly a descendant of Lisp.


Perhaps I've misunderstood, but several languages have REPLs so that isn't unique to Lisps, is it?


This blog post explains the sort of workflow a Lisp/Smalltalk-style REPL enables:

https://mikelevins.github.io/posts/2020-12-18-repl-driven/


Interesting; when you define a function (or anything) in a breakloop, does it get saved to source? (or is there an option to?)

I'm trying to imagine using this style of programming in Python, but the type-redefinition problem is still there


Sometimes a break-loop is all you have.

But usually there is something around it. Typically a Lisp system can run several REPLs at the same time, one might be a break loop. You can define a function in another REPL and use it in the break-loop. You can define the function by loading code in a break-loop.

An interaction might be (here in LispWorks):

  CL-USER 6 > (> (sin-x2 3) 0)

  Error: Undefined operator SIN-X2 in form (SIN-X2 3).
    1 (continue) Try invoking SIN-X2 again.
    2 Return some values from the form (SIN-X2 3).
    3 Try invoking something other than SIN-X2 with the same arguments.
    4 Set the symbol-function of SIN-X2 to another function.
    5 Set the macro-function of SIN-X2 to another function.
    6 (abort) Return to top loop level 0.

  Type :b for backtrace or :c <option number> to proceed.
  Type :bug-form "<subject>" for a bug report template or :? for other options.

  CL-USER 7 : 1 > (load "~/sin-x2.lisp")
  ; Loading text file /Users/foo/sin-x2.lisp
  #P"/Users/foo/sin-x2.lisp"

  CL-USER 8 : 1 > :c 1
  T
A function SIN-X2 does not exist. There is an error and we get a break-loop. The break-loop is one level deep. All of Lisp is available in a break-loop: the compiler, the code loader, the evaluator, ... Lisp stays with the break-loop in the context of the error.

I define/write the function in a file and then load the file. Loading the file makes the function available in the running Lisp.

Then I use the continue restart to try invoking the function again, and the error is gone.


In Smalltalk, the code lives inside the image. You can dump it to a file/set of files using the "file-out" operation, and import it from files with file-in. But the code is never executed directly from the files: it's parsed and compiled on read, which creates the runtime representation of the code that is saved in the image. When you break into a debugger in the running Smalltalk image, it works on top of the same infrastructure, giving you direct access to the actual code as well as its textual representation at all times. Smalltalk is also highly reflective, so the compiled code (methods) and objects (classes are objects) are available for introspection and modification. Though it's not the best introduction, this post may help get you interested in the paradigm: https://blog.bracha.org/exemplarDemo/exemplar2021.html?snaps...

It's impossible to do this with Python, or any other language that wasn't created with an image-based runtime. I tried. This approach only works when the "vertical integration" reaches from the very bottom (compilation, stack handling, memory allocation, object creation) to the very top (editing the source, refactoring, test runners, etc.). It's incredibly powerful when it works and is done right. Imagine your whole OS being built around GDB, with extremely late binding of everything and each method being a separate .so/.dll. It would probably be incredibly slow, which is why Smalltalk and Smalltalkers pioneered JITs, but it would also allow you to override any part of your running system while it's running.



My understanding is that you usually write the code in your editor and use an editor command to send the code directly to the Lisp process (that’s what I do, anyway). Or just copy-paste.

I’m not aware of a way to get code back out of the Lisp environment, but I’m fairly new to all this.


That is exactly the clarification I needed. Thanks!


There’s very rarely any where you can actually interact with the running program. At best you can usually load in a method and poke it, but that’s not the same.


So, the REPL mentioned in the article differs from, say, the Julia REPL[0]? Is it simply using the same term for slightly different things?

[0] https://docs.julialang.org/en/v1/stdlib/REPL/


One of the other comments in this thread links to: https://mikelevins.github.io/posts/2020-12-18-repl-driven/ which makes a distinction between "having a REPL" and "supporting repl-driven programming". Modern languages which have a REPL generally don't support the sort of repl-driven programming they refer to. Julia's Revise.jl [1] supports one way of repl-driven programming, and it is in fact the recommended way of doing Julia, but from my understanding it's still very different from the Lisp ways.

[1] https://timholy.github.io/Revise.jl/stable/


The Julia REPL front end is written in Scheme IIRC


I believe it's Julia's current default parser that's written in a Scheme, the REPL is a normal Julia standard library written in Julia itself: https://github.com/JuliaLang/julia/blob/master/stdlib/REPL/s...


also we are very close to moving Julia's parsing (but not lowering) to pure Julia as well.


OK thanks for the correction!


that would suggest it's an "enduring innovation" as per the question


But the previous post argues that the article is about "lisp features that still make it unique".


Forth.


YAML.


Useless parentheses?


Sigh. How many parentheses does this have:

    myFn(x,y,z);
vs.

    (myFn x y z)


That's a strawman. Indeed, function calls need the same number of parentheses. But literally every popular language (C/C++/Java/JavaScript) has syntactic cues for:

     - assigning values to variables
     - function definitions
     - denoting elements of structures
     - if, for and while control flow
     - fields of structs or objects
     - etc.
These also play a huge role in structured/OOP programming. Less popular languages usually have syntactic cues for the important parts of their paradigm.

I once transpiled the example code from the Janet site [0] to Python (I don't speak Janet). You can compare them, count the parentheses, and compare the syntactic cues in general: https://news.ycombinator.com/item?id=34846516

[0] : https://janet-lang.org/


Jesus, it was a joke.

The major diff is that with imperative langs you don't use functions as much as with functional programming langs, even though in C# and other langs functional patterns become more and more common. Which is also my main criticism of functional programming langs: functional programming is just a design pattern among others. Lispians will disagree.

For the last 25 years I've heard this over and over: that functional programming will take over. I worked at companies that invested heavily in some of the functional programming langs (like F#). All of them were heavily crippled by it. If functional programming were so great, it would have conquered the world by now; it has been around 4-5 times longer than smartphones. It is simply garbage. Practice and theory are two different beasts.


It is quite the opposite: every pair is meaningful. Extra parentheses cannot be added just for the heck of it without changing the meaning of the code.
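For example (Common Lisp):

    (+ 1 2)    ; => 3
    ((+ 1 2))  ; error: the extra pair turns this into an illegal function call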


There are many parentheses, but they're hardly useless.


Love to meet you at a party! ;) Cheers!


Try Programmer Dvorak if you are too bothered by pressing Shift.


My nickname for Clojure/Lisp is (parenthetical hell).


(nirvana (in hand))


Everything worthwhile about Lisp has been integrated into TypeScript and JavaScript.


Yeah, the same way everything worthwhile about spring water can be found in mud.


Extending the JavaScript grammar into a domain-specific language, and then generating conforming code to run, almost requires adding Babel.js to your build and then doing a lot more work than writing a Lisp macro. Most devs probably never will.
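For comparison, the kind of syntax extension that would need a Babel plugin is a handful of lines in Lisp. A sketch of a hypothetical with-timing macro in Common Lisp (the name and usage are made up for illustration):

    ;; Hypothetical WITH-TIMING: times its body and passes the body's
    ;; values through unchanged. No build step, no AST visitor objects.
    (defmacro with-timing (&body body)
      `(let ((start (get-internal-real-time)))
         (multiple-value-prog1 (progn ,@body)
           (format t "~&elapsed: ~a ticks~%"
                   (- (get-internal-real-time) start)))))

    ;; (with-timing (some-expensive-computation)) ; hypothetical usage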


Not homoiconicity and macros. And in the case of TypeScript, not interactivity.

Besides that, Mrs. Lincoln, how did you enjoy the play?



