["define", ["factorial", "n"],
["if", ["=", "n", 0], 1,
["*", "n", ["factorial", ["-", "n", 1]]]]]
["display", ["factorial", 10]]
Then again, maybe that's too ugly to consider. :-)
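Ugly or not, a JSON parser gets you most of the way. Here is a hypothetical minimal evaluator (Python; the names and structure are my own sketch, treating JSON strings as symbols) that runs the program above:

```python
import json
import operator

# Hypothetical mini-evaluator: JSON strings act as symbols, JSON arrays
# as forms. Heads of forms are assumed to be symbols for simplicity.

def evaluate(expr, env):
    if isinstance(expr, str):                  # symbol: variable lookup
        return env[expr]
    if not isinstance(expr, list):             # numbers are self-evaluating
        return expr
    head = expr[0]
    if head == "define":                       # ["define", [name, *params], body]
        name, *params = expr[1]
        env[name] = (params, expr[2])
        return None
    if head == "if":                           # ["if", test, then, else]
        branch = expr[2] if evaluate(expr[1], env) else expr[3]
        return evaluate(branch, env)
    fn = env[head]
    args = [evaluate(a, env) for a in expr[1:]]
    if callable(fn):                           # builtin
        return fn(*args)
    params, body = fn                          # user-defined function
    return evaluate(body, {**env, **dict(zip(params, args))})

env = {"=": operator.eq, "*": operator.mul, "-": operator.sub, "display": print}
program = json.loads("""[
  ["define", ["factorial", "n"],
    ["if", ["=", "n", 0], 1,
      ["*", "n", ["factorial", ["-", "n", 1]]]]],
  ["display", ["factorial", 10]]
]""")
for form in program:
    evaluate(form, env)                        # prints 3628800
```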
In addition to some of the other book and website recommendations people have made (SICP, mal, etc.), I think "Lisp in Small Pieces" by Christian Queinnec is really good if you want to take it further.
I think the magic is in getting the interpreter to work, and depending on the personality, that might be what's motivating.
I bought the dragon book long ago. I got about halfway through and it was still on parsing! What a waste. That's not a problem for s-expressions.
That makes me think you might've forgotten what it's like to be a beginner. Imagine teaching a high school student and that every other word you're using doesn't make complete sense to them yet.
Let's put things in perspective: the parser bit of putting together your own lisp is by far the easiest task of all.
Claiming that parsing Lisp is hard for a beginner but everything else is accessible makes no sense at all. I mean, a parser for s-expressions requires what? Five or six terminal tokens and three or four non-terminal tokens? That's less than 50 LOC.
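For scale, here is a hypothetical sketch in Python (a whitespace tokenizer plus one recursive reader; no string literals or comments) that comes in well under that bound:

```python
def tokenize(src):
    # Pad parentheses with spaces so split() does all the lexing.
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        lst = []
        while tokens[0] != ")":
            lst.append(read(tokens))
        tokens.pop(0)            # discard the closing ")"
        return lst
    if tok == ")":
        raise SyntaxError("unexpected )")
    try:
        return int(tok)          # number literal
    except ValueError:
        return tok               # anything else is a symbol

def parse(src):
    return read(tokenize(src))

print(parse("(define (square x) (* x x))"))
# → ['define', ['square', 'x'], ['*', 'x', 'x']]
```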
If you're not undertaking this to learn how to write a Lisp, which includes writing a parser, what's the point? Get this tribe 3 shit out of here...
I didn’t write an s-expr parser for the minischeme interpreter I like to play around with, but I did use re2c/lemon to generate one. Then I got Unicode working…
Couldn’t even imagine what it would take to hand write a lexer that does Unicode properly, even the scheme string and char functions call into generated functions (and a library for utf8 strings) since that stuff is complex.
Also don’t know what tribe 3 is, one of those things the cool kids talk about?
one major difference is that you don't have to write the parser at all the other way
It's trivial to anyone.
Furthermore, there are already s-expression parsers out there.
Suggesting JSON is simply poor, ill-thought out advice.
Wishing JSON strings were symbols doesn't make them so.
In Lisp, strings and symbols are different types. A symbol, since it is not a string, is not equal to any string in the system.
On the other hand, two different symbols which are unequal can have the same name. This happens if at least one of them is uninterned or, in a Lisp that has symbol packages, if they are interned in different packages.
Symbols are compared by identity, not by their name.
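A sketch of what interning looks like (Python; hypothetical names, one table standing in for one package):

```python
# Interned symbols with the same name in the same table are the very
# same object, so comparison is pointer identity, not string compare.

class Symbol:
    def __init__(self, name):
        self.name = name

_interned = {}   # one table, roughly one "package"

def intern(name):
    if name not in _interned:
        _interned[name] = Symbol(name)
    return _interned[name]

a, b = intern("foo"), intern("foo")
assert a is b                       # same symbol: one machine-word comparison
gensym = Symbol("foo")              # uninterned: same name, different symbol
assert gensym is not a and gensym.name == a.name
```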
You misunderstand (or GP was unclear). They don't mean that JSON strings are literally symbols, but in this prototype on-the-fly JSON DSL, the JSON strings represent symbols. A lisp string in this proto-lang would be "\"hello world\"" instead.
Being helpful or on-topic is not his goal.
By the by, there was no ad hominem there. Ad hominem is when you insult someone as a way to avoid what they're saying. What this person is saying has not been avoided.
Probably put the fallacies away: https://laurencetennant.com/bonds/bdksucks.html
I like your idea for making a prettier lisp quickly, but depending on the language/library, isn't this going to give a user a bunch of nested `match` objects instead of nicely nested and printable lists?
No free lunch - I guess you trade one ugliness for another.
Expressions with parentheses are not a regular language; expressions which can match them are not regular expressions, even if they extend regular expression syntax.
"Regular expression" became the name of a software feature, which retained its name as it was extended beyond regular sets.
Just like "web browser" became the name of a kind of program, and that name still sticks even though it's now a monstrous application platform, not just for browsing.
I said, "instead of a regular language". You even quoted it. You don't even read the bits that you cut and paste to argue against.
> expressions which can match them are not regular expressions
And that's the damned point. Modern "regular expressions" aren't just "regular" any more. They've had back references and other extensions for a while now:
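For instance, in Python's re module, a single backreference already takes you outside the regular languages (no finite automaton can check that two arbitrarily long substrings are equal):

```python
import re

# \1 must match exactly what group 1 matched, so this recognizes
# a^n b a^n for the same n -- not a regular language.
pattern = re.compile(r"(a+)b\1")

assert pattern.fullmatch("aabaa")        # n = 2: matches
assert not pattern.fullmatch("aabaaa")   # unbalanced: no match
```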
Please - don't reply to me until you take the time to read and understand what I've said.
Skipping the tedium until you've got "hello world" and factorial working makes sense to me. You can always go fix the syntax later.
Go through the first half of Crafting Interpreters. And then try to complete mal - Make a Lisp. That's it. You'll only need these two.
P.S. People also say good things about Build Your Own Lisp, but I didn't finish it because I found that spending so much time writing C didn't feel like enjoying something elegant.
The example also indicates how the book conflates language and implementation. It's perhaps not the most important thing to know if one is learning how to design and implement programming languages, but unfortunate as CL and Scheme have independent standards; and separating semantics and implementation makes it possible to discuss how differing implementation techniques can be used for the same language. The language implemented also uses dynamic scoping, which is plain weird nowadays.
The C code is of questionable quality, and the author doesn't cover how to improve your chances of writing working C (e.g. using a debugger, testing with sanitizers, etc). This is more of an issue, because the interpreter copies and mutates in a weird way which appears quite easy to mess up, rather than using a garbage collector. The book uses a parser library by the author, so using e.g. the Boehm GC library doesn't seem too bad, but arguably doing your own memory management is more representative of C code that isn't a language implementation. The approach also makes mutation hard to reason about; it's hard to say what a particular update is actually going to affect.
You see issues in their code like confusing the concept of "atom" and "symbol", or symbols just being strings, evaluation level confusions and things of that nature.
There is more than one elegance in Lisp; what people find elegant is subjective. Some people are charmed by a meta-circular interpreter. Some are charmed by interned symbols: two symbols are equal or not based on just some machine word comparison. Some are charmed by the way lists are made of garbage-collected pairs so you can endlessly rearrange lists like pouring water from one container to another, without having to worry about the storage allocation. Some are charmed by being able to quote any part of a program to turn it into a data literal.
None of these things are exactly simple under the hood.
Many elegant aspects of a modern Lisp wouldn't have been found in the classic Lisp 1 or 1.5. I think it's elegant that a type error (or any other) results in an exception that can be handled. I think unwinding a stack is elegant.
Structural pattern matching is elegant. Bignum integers are elegant. File compilation with fine-grained evaluation control is elegant.
Elegance is about UX, and it can come off as simplicity (especially if we examine just one area at a time rather than the integrated whole); but beneath that UX there is a lot of huffing and puffing under the hood.
> What's at the heart of Lisp that makes it so simple and elegant?
In terms of implementing it, probably the Cons Cell. The insight being that the heart of Lisp is pairs, not lists:
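A minimal Python sketch of that idea (hypothetical, not from the original post):

```python
# A cons cell is just a pair of two slots, car and cdr. Lists are a
# convention layered on top: (1 2 3) is (cons 1 (cons 2 (cons 3 nil))).

class Cons:
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

nil = None   # the empty list

def lisp_list(*items):
    out = nil
    for item in reversed(items):
        out = Cons(item, out)
    return out

lst = lisp_list(1, 2, 3)
assert lst.car == 1
assert lst.cdr.car == 2
assert lst.cdr.cdr.cdr is nil   # proper lists end in nil
pair = Cons(1, 2)               # pairs need not form lists: (1 . 2)
```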
If by "lisp" people just mean s-expressions then yes s-expressions are really easy to parse.
We should always seek to educate the pupil graciously.
1. Parse and produce S-Expressions ("read" and "print").
2. Use the object system of the implementation language as much as possible. If necessary, implement some lisp-specific types like the cons pair, empty list, symbol. First get it to work correctly, you can optimise later.
3. Implement all the "special forms" which can't be implemented using macros in the implementation language.
4. Implement any library functions that can't be implemented in lisp itself using the implementation language.
5. Implement "eval" (evaluates S-Expressions). Now you've got a working interpreter.
6. Use your interpreter to evaluate macros. Use that to implement any additional "special forms".
7. Write any additional library functions in your own lisp.
That's it, at this point you have a working lisp implementation. There are many opportunities for optimisation, from lambda lifting and continuation passing style to hygienic macros and even just in time compilation.
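The core of step 5 can be sketched in a few lines of Python (hypothetical; assumes the reader produces nested lists with strings as symbols, and leans on host closures per step 2):

```python
import operator

def eval_expr(expr, env):
    if isinstance(expr, str):                  # symbol: variable lookup
        return env[expr]
    if not isinstance(expr, list):             # numbers are self-evaluating
        return expr
    op, *rest = expr
    if op == "quote":                          # special form: no evaluation
        return rest[0]
    if op == "if":
        test, then, alt = rest
        return eval_expr(then if eval_expr(test, env) else alt, env)
    if op == "lambda":                         # reuse host closures (step 2)
        params, body = rest
        return lambda *args: eval_expr(body, {**env, **dict(zip(params, args))})
    fn = eval_expr(op, env)                    # ordinary application
    return fn(*[eval_expr(a, env) for a in rest])

env = {"+": operator.add, "*": operator.mul}
square = eval_expr(["lambda", ["x"], ["*", "x", "x"]], env)
assert square(7) == 49
```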
Together with the YouTube recordings of the lectures about the book, it will give you many aha moments about why it's so elegant, etc.
Including topics like types and a Prolog-like logic language.
But I agree with recommending it to everyone who wants to be educated about computer programming and writing beautiful, readable, and stable code, as it introduces the reader to many concepts, like abstraction barriers/layers, making things more readable by splitting them into well-named functions, and elegant solutions to problems.
Reading part of SICP (the first 40% roughly) and really working on the exercises on my own, without looking up solutions early, was a turning point in my programming self-education. I learned lots of things that I did not learn at university, where no material of such high quality was used.
If you enjoy SICP that's great but if you don't, don't let it stop you.
The title is "Build Your Own Lisp" and it is free to read.
You learn C and build your own Lisp dialect.
The mal repository also features an implementation in basically any language you can think of, for reference in case you get stuck.
Fun fact: mal originally was made to see if you could implement a lisp with just GNU Make macro language (spoiler: you can!), after the author saw someone else making a lisp language in bash.
I started working through the book using Rust instead of C, which allowed me to parse things safely and make sure my grammar covers all cases in a type-safe way. If anyone works through the book, I recommend avoiding C, which is what the book itself uses.
However, before reading part of the book, I had already watched part of the miniKanren uncourse, where Will Byrd writes a Scheme parser (well, the basic part of it, not implementing the whole standard), explaining everything quite well. That made me confident that I could change the language I use to implement the things in the book and still get things right.
So my recommendation is to look at the Minikanren Uncourse videos. It should be in the first 4 videos of the series.
I did not know about the existence of MAL.
I will check it out.
Moreover, in line with the original question asked, I would just suggest The Little Schemer. Here you not only learn the language and how to think recursively, but you also learn how to implement a lot of the built-in and library functions that come with languages.
The Little Schemer was a life changing book for me.
mal has also been "featured" on HN a couple of times in the past. Here are some discussions:
- 2015 - https://news.ycombinator.com/item?id=9121448
- 2017 - https://news.ycombinator.com/item?id=15226110
- 2019 - https://news.ycombinator.com/item?id=21670442
- 2021 - https://news.ycombinator.com/item?id=26924344
According to the schedule, we're bound for another frontpage feature of mal in 2023, looking forward to it :)
See also 'LISP from Nothing', a Nils Holm book:
Before Holm, there was https://compilers.iecc.com/crenshaw/
... at https://github.com/kanaka/mal
Fun fact: I got to work with Joel at a Clojure-based startup, LonoCloud (since acquired by ViaSat). Super sharp dude, and very helpful about MAL. Definitely recommend.
IMO, it's a synergy of several things. Trivial parsing*, homoiconicity, automatic memory management, dynamic types, first class symbols, first class everything†, extensible everything†, heavy reliance on anonymous functions, flexible scope rules.
All of those have since been adopted by other languages, but only Lisps have them all working together. (And not every Lisp has every feature.)
* ... until you consider various extensions
Or, if you want to do a low-level implementation, Quiennec's Lisp in Small Pieces.
Most of that time went into the code generator. An interpreter is much simpler, and you could have something similar in a couple of hours. Especially if you use a JSON parser to start and switch to S-expressions once you get everything working. Or use an existing Lisp with a (read) primitive that knows the syntax.
As for "why Lisp?", it's a bit like how you never need to read the JSON spec to write JSON in practice, because they unified the syntax into a few core datatypes(primitives, arrays, objects). Besides the simple syntax, they also include a symbol table core concept that tells you how to interpret the minimal syntax.
In most languages, it's a lot of work adding a new feature to the language: Lexer to recognize token, backtracking parser, adding a new internal type, etc. In a Lisp interpreter, you just edit the symbol table to know about a new form and you're done.
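A hypothetical sketch of that table-driven shape in Python (the names are mine, not from any particular implementation):

```python
# A table-driven evaluator where adding a language feature means adding
# one table entry -- no lexer or parser changes required.

special_forms = {}

def eval_expr(expr, env):
    if isinstance(expr, str):                  # symbol lookup
        return env[expr]
    if not isinstance(expr, list):             # self-evaluating literal
        return expr
    head = expr[0]
    if head in special_forms:                  # dispatch on the table
        return special_forms[head](expr[1:], env)
    fn = eval_expr(head, env)                  # ordinary application
    return fn(*[eval_expr(a, env) for a in expr[1:]])

special_forms["if"] = lambda args, env: eval_expr(
    args[1] if eval_expr(args[0], env) else args[2], env)

# "Adding a new form" is one dictionary entry:
special_forms["when"] = lambda args, env: (
    eval_expr(args[1], env) if eval_expr(args[0], env) else None)

env = {"<": lambda a, b: a < b}
assert eval_expr(["when", ["<", 1, 2], 42], env) == 42
```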
Despite the core being simple, a practical Lisp environment is going to have thousands of primitives with a variety of effects on the runtime, and you'll need to look them up. Racket has about 1700 implemented in Chez Scheme, for example. But at least they're "only primitives", so there's uniformity in how they're used. Nothing like "What's the syntax for an anonymous function that takes 1 callback as argument that captures a local by reference and 3 other variables by value?"
If you mean for the full power of a lisp then you still have to pick: for Scheme that might be call-with-cc and hygienic macros. For Common Lisp that might be regular macros and reader macros. Those take more code and thought.
Understanding hygienic macros isn’t all that hard but I agree implementing them is difficult. On multiple occasions I’ve tried to plan out how to get them to work but (ironically) everything I can find only explains the theory and not how to practically implement them.
This confronts a whole bunch of issues directly and immediately. What is a list, really? How do I handle recursion in my parsing--implementation language (fast--but maybe not tail recursive and weak to cycles) or lisp (slower--but probably infinite and handles circular lists)? How do I deal with garbage (long parses kick up a lot of garbage)?
Dealing with dotted pairs is the difference that means you understand implementing lisp rather than are just toying with it.
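A hypothetical Python sketch of a reader that gets dotted pairs right:

```python
# "(1 2)" reads as (1 . (2 . nil)); "(1 . 2)" puts 2 directly in the
# cdr slot, producing an improper list.

class Cons:
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read_expr(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        return read_list(tokens)
    return int(tok) if tok.lstrip("-").isdigit() else tok   # number or symbol

def read_list(tokens):
    tok = tokens[0]
    if tok == ")":
        tokens.pop(0)
        return None                       # nil terminates a proper list
    if tok == ".":
        tokens.pop(0)
        cdr = read_expr(tokens)           # whatever follows "." is the cdr
        assert tokens.pop(0) == ")"       # "." must precede the final element
        return cdr
    return Cons(read_expr(tokens), read_list(tokens))

pair = read_expr(tokenize("(1 . 2)"))
assert pair.car == 1 and pair.cdr == 2            # improper: cdr is an atom
lst = read_expr(tokenize("(1 2)"))
assert lst.cdr.car == 2 and lst.cdr.cdr is None   # proper: nil-terminated
```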
LISP builds upon a very simple, yet powerful data structure (pairs that can be arranged into binary trees) that has a direct correspondence with its syntax. This means that it is very simple to write a very basic LISP interpreter in almost any programming language, and to use it to bootstrap a more fully-featured one. It's even quite feasible to write one in assembly language, and the very first ones indeed have been implemented that way.
I have tended to implement more complex core data structures than cons cells, e.g. native strings, maps, and arrays.
Suggest looking at newLisp and picoLisp for ideas on how non-traditional Lisps have been implemented.
it eats itself into constancy