Not as fun, but there is a Rust book that teaches you Rust by teaching you to program 2D games. It is called Hands-On Rust [0], and it is from PragProg.
When learning a language, I always look for books that actually make you do something, rather than tediously walking you through the user-facing API.
“The Little Schemer”, by Daniel Friedman, takes a different tack, but, like LoL, is entertaining while teaching you the Scheme language using the Socratic method.
Lisp is awesome, but I wish more places used regional pricing for online purchases. It's kinda crazy paying 40 dollars for an ebook since it's almost 1/5 of our minimum monthly wage (Brazil)
2021 was the second year in a row that Apress ran sales where they put out great Lisp books for 6/7/8 Euro, printed or digital. Subscribe to r/lisp or r/common_lisp and you will get the news.
Yes, books introducing languages by implementing fun applications, like games in this case, usually help me keep my motivation and interest longer than other kinds of “Introduction to ...” books.
Same goes for me. I get tired when a book is practically a boring walkthrough of the user-facing API of a language. So, I look for something that actually makes me do something fun with the language that is being taught.
I wonder if Hylang can be used to work through this book, for someone approaching Lisp from a Python background, or whether the impedance mismatch between Hylang and CL is too high to allow that.
I think LoL is too CL-specific. If you know both languages first, you can pretty much translate, but since they'd be trying to learn Lisp in the first place, this is a bad idea.
On the other hand, [Hissp][1] has a pretty good tutorial for anyone coming from a Python background.
Funny, but the premise is insane: that Lisp is full of things that are done better than elsewhere, and that Lisp is the weapon against bugs. I worked professionally in Lisp (Scheme) and this is 100% false. In my experience, dynamically typed languages (incl. Lisp, Python, and Smalltalk) will absolutely be more bug-prone than a good typed language (like Haskell or Rust) on non-trivial programs.
I was slightly surprised to learn how well Common Lisp had implemented its types. I keep wondering why CL almost completely failed to break into the minds of people in the early 2000s. That was about the time I first learned about Python, which kind of seemed to be everywhere. It took 5-10 years before I even heard of Common Lisp.
And now it seems to me that the Common Lisp which was pretty much fixed in the '90s is superior in many ways (runtime, programming environments, typing -- to mention a few) to even the revised Python 3 of 2021. And then JavaScript, essentially a bad clone of Lisp, got popular? Makes absolutely no sense.
I’m also surprised by this and I continue to love Common Lisp over all other languages. But I think one reason is that languages aren’t just languages. The amazing parts of Common Lisp were all standardized in 1992 or whatever. By people who are not at all involved in any kind of Unix, Linux, open source, web, scripting, etc. It’s like a beautiful cultural legacy that’s maintained by some enthusiasts and a couple of insular commercial vendors. Now what I really wonder is why nobody outside of some Scheme dialects have stolen the restartable condition system, which is so amazing and straightforward to implement.
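For anyone who hasn't seen it, a minimal sketch of conditions and restarts (the names here are made up for illustration):

    (define-condition bad-entry (error)
      ((entry :initarg :entry :reader bad-entry-entry)))

    (defun parse-entry (e)
      ;; Offer a recovery strategy at the point where the error occurs.
      (restart-case
          (if (integerp e)
              e
              (error 'bad-entry :entry e))
        (use-value (v)
          :report "Use a replacement value."
          v)))

    (defun parse-all (entries)
      ;; A handler far up the stack picks the strategy; execution resumes
      ;; at the restart point instead of unwinding the whole computation.
      (handler-bind ((bad-entry (lambda (c)
                                  (declare (ignore c))
                                  (invoke-restart 'use-value 0))))
        (mapcar #'parse-entry entries)))

    ;; (parse-all '(1 :oops 3)) => (1 0 3)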
> By people who are not at all involved in any kind of Unix, Linux, open source, web, scripting, etc.
Common Lisp on UNIX appeared in the mid 80s, long before the ANSI CL standard.
Scott Fahlman was one of the five designers of the early Common Lisp. He headed the CMU CL project, which was a) on UNIX since around 1984 and b) public domain. Code from CMU CL was used in a dozen other implementations.
Other well known CL implementations for UNIX which were developed before 1994, when the ANSI CL standard was published: Allegro CL, GNU CLISP (free), (A)KCL (no cost, free, later renamed to GNU Common Lisp), LispWorks, Lucid CL, ...
Three large commercial implementations of CL were developed initially exclusively for UNIX and were available end 80s: Allegro CL, Lucid CL and LispWorks.
Generally the language came out of well funded research labs and companies and was designed to be portable across a large variety of operating systems (like UNIX variants, VMS, LISPMs, DOS/Windows, Mac OS, ...).
For example, a simple programming website: if you want to use your iPhone, with Working Copy and your GH Pages repo, to edit, test and publish web pages with JS, CSS and HTML -- you can, with full version control, and you can even use an external library if you have to.
Second issue: external integration, which Python has as a scripting language and C (and C++) have as OS-level “script”...
Third issue: its package system is hard to use.
I am not anti-Lisp; I just spent $700 to get a Casio AI-1000 and am trying to use uLisp.
It's just not mainstream-like.
God’s programming language, as they say -- not used by God, and not by mortals either.
Well, for me, it's just not ergonomic. Unlike something like Python.
I solved this year's Advent of Code in Common Lisp in an attempt to learn it better. I determined in the process that the language is awful by 2021 standards, and that if you want a Lisp that is actually usable, you should go with Clojure or a decent Scheme.
I can see how Python is clearly more ergonomic than Common Lisp, but I really don't see any significant differences between CL, Clojure and Schemes. Just ergonomic micro-optimizations.
Clojure's native thread-safe data structures are a significant difference, though.
* Absolutely nothing is consistent. When you mutate something, is the place it goes the first or last argument? No consistency here.
* When you pass a value to a function, is it by value or by reference? Who knows? Rules are non obvious, do not follow the principle of least surprise.
* Lisp-2 just makes working with higher-order functions obnoxious (see the sketch after this list).
One thing it has going for it though is the loop macro, that's admittedly pretty neat.
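To illustrate the Lisp-2 point, a minimal sketch:

    ;; Functions and variables live in separate namespaces, so you need
    ;; #' to name a function as a value, and FUNCALL to call one held
    ;; in a variable:
    (mapcar #'evenp '(1 2 3))   ; => (NIL T NIL)

    (let ((f (lambda (x) (* x x))))
      (funcall f 5))            ; => 25

    ;; In a Lisp-1 like Scheme, these would just be (map even? ...) and (f 5).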
There is no "by reference" in Common Lisp; everything is a value. Some values have reference semantics. This makes no difference unless you're mutating, or making unwarranted assumptions about the eq function.
To understand most code, you can just pretend that all values have reference semantics. If mutation is going on and/or the eq function is being used, you have to prick up your ears and pay attention to that detail.
If you're mutating any object, it is necessarily a value with reference semantics, period. Objects that do not have reference semantics are immutable.
Some objects that cannot be mutated (like numbers) can have reference or value semantics depending on how they are implemented. For instance, a bignum integer always has reference semantics. Small integers usually have value semantics: they fit into a machine word with no heap part. In that case, all instances of the number 0 or 1, and some range beyond that in both directions, will always be the same object according to eq.
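For example (eq on numbers is implementation-dependent; this is what a typical implementation such as SBCL does):

    * (eq 1 1)
    T    ; small integers are immediate machine words, so every 1 is "the same object"
    * (eq (expt 10 100) (expt 10 100))
    NIL  ; each bignum result is freshly heap-allocated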
If you're mutating an object (and, thus, something that has reference semantics), the difference that the reference semantics makes is that other parts of your program may hold a reference to that object; your code has not received a copy. If you haven't accounted for that, you probably have a bug.
Sure, this stuff isn't obvious, unless you already know another dynamic language like Ruby, JavaScript, Python, ...
Complete neophytes have to be taught it from the fundamentals.
No, there really is no consistency to Common Lisp's mutation.
$ sbcl
* (defvar a (list))
A
* a
NIL
* (defun x (v) (push 'b v) v)
X
* (x a)
(B)
* a
NIL
Now if you were to do something similar with an array, the original variable would be mutated. Just another example of how Common Lisp doesn't have any sort of internal consistency. Once again, the rules are non-obvious and do not follow the principle of least surprise.
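For instance, a sketch of the array case:

    * (defvar arr (make-array 3 :initial-element 0))
    ARR
    * (defun y (v) (setf (aref v 0) 'b) v)
    Y
    * (y arr)
    #(B 0 0)
    * arr
    #(B 0 0)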
> Now if you were to do similar with something like an array, the original variable would be mutated.
That is false. To do a similar thing with an array, we need a non-mutating operation which returns a new array which is like an existing array, but with an element prepended.
Then we need a macro to mutate a place to replace an existing array in that place with a new such an array.
Then, exactly the same kind of behavior will be reproduced:
(defun array-cons (obj array)
  ;; Return a fresh array like ARRAY, but with OBJ prepended.
  (let ((new-array (make-array (list (1+ (length array))))))
    (replace new-array array :start1 1)
    (setf (aref new-array 0) obj)
    new-array))

(defmacro apush (val array-var)
  (assert (symbolp array-var) (array-var)
          "fixme: simple implementation: ~a must be symbol" array-var)
  `(setf ,array-var (array-cons ,val ,array-var)))
[1]> (defvar a #())
A
[2]> (defun x (v) (apush 'b v) v)
X
[3]> (x a)
#(B)
[4]> a
#()
What? Of course; we are not mutating any object here, but a variable: the local variable of x.
[5]> (apush 1 a)
#(1)
[6]> (apush 2 a)
#(2 1)
[7]> (apush 3 a)
#(3 2 1)
Lists work this way because they are made of cells, and those cells are immutable (if you want them to be) for very good reasons. This is part of the essence of Lisp since the dawn of the language.
It makes less sense to treat arrays that way. It's possible, but you need an exotic data structure to do it even halfway efficiently; that structure will never be as efficient as an ordinary mutable array for ordinary array work.
Whereas, treating singly linked lists this way is almost free of additional cost.
No, it's not. Notice how, in my example, the resultant list is updated in the function parameter, but not the initial var defined by defvar. Whereas if I made an array via (make-array), passed it into the function, and updated it the way the language documentation tells you to (setf and one of the aref functions), you'd end up with both the function parameter and the initial var pointing to the updated value. These are two logically different behaviors! And that's exactly what my criticism stated: "When you pass a value to a function, is it by value or by reference? Who knows? Rules are non obvious, do not follow the principle of least surprise."
> Lists work this way because... It makes less sense to treat arrays that way.
Yes, that's exactly the point. The language is inconsistent and does not follow the principle of least surprise.
> When you pass a value to a function, is it by value or by reference? Who knows?
Value. Value, value, always value. However, it seems that maybe you don't understand exactly what the value is that you're passing.
> Notice how, in my example, the resultant list is updated in the function parameter
No [0]. It makes a new list (really a new cons cell, which contains the new item and then points to the old list [1]), and assigns that to v. And your example doesn't actually show this update happening, it just shows the return value of push (it so happens that it is updated, however). The original list --- passed in or otherwise --- is never changed. You say that you're "updating" a list, but you're not mutating or updating your data structure at all --- you're making something new, and assigning that to v.
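Concretely (the exact expansion varies by implementation):

    * (macroexpand-1 '(push 'b v))
    (SETQ V (CONS 'B V))

push allocates a new cons and assigns it to the variable v; nothing that was already reachable through v is touched.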
> These are two logically different behaviors!
They are two logically different operations. Are you sure that you understand the data structures you're working with, and the operators that you're calling on them? Did you perhaps try to map concepts from another language into Common Lisp, find functionality that looked similar on the surface, then become surprised when the results were not identical?
Linked lists differ fundamentally in structure from arrays, and so the operators which are commonly used with them differ in turn. Perhaps you would like to compare arrays to vectors, as a closer approximation in data structures, with similar typical strategies of manipulation?
> does not follow the principle of least surprise.
You keep saying this, as if that is that. But nothing about your example is surprising to me, so I suppose this is a matter of perspective.
> Value. Value, value, always value. However, it seems that maybe you don't understand exactly what the value is that you're passing.
This is a distinction without a difference.
> ...you're making something new, and assigning that to v.
Yes, I know that. The point is that the behavior seen by the user for similar operations is entirely different between lists and other pieces of Common Lisp.
> They are two logically different operations.
Yes, obviously. The point is that Common Lisp does an absolutely crap job of making these things actually consistent from the point of view of the user. The language is littered with entirely inconsistent behavior and choices.
> Common Lisp does an absolutely crap job of making these things actually consistent from the point of view of the user
Common Lisp provides a decently designed sequences abstraction which allows the encapsulated vectors and traditional unencapsulated lists, as well as strings, to be manipulated not just similarly, but by exactly the same code.
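For instance, the very same sequence functions accept lists, vectors and strings:

    (remove 0 '(1 0 2 0 3))     ; => (1 2 3)
    (remove 0 #(1 0 2 0 3))     ; => #(1 2 3)
    (remove #\0 "10203")        ; => "123"
    (map 'vector #'1+ '(1 2 3)) ; => #(2 3 4)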
This was developed in recognition of exactly the issue that you are getting at. Forty years ago, the group of people designing Common Lisp were aware of this desire to have a consistent access method for different kinds of sequences, and they did something about it.
The charge of "absolutely crap job" can only be fairly leveled at a language that makes no effort to provide for uniform treatment of encapsulated vectors and unencapsulated lists.
What looks "surprising" depends on your background. I agree that Lisp contains surprises for someone whose programming experience is limited to Python or JavaScript, which provide only encapsulated arrays as the principal sequence-aggregation mechanism.
I had two decades of programming experience coming to Lisp, and had written C programs which used both unencapsulated lists:
list_node *list = NULL;
and I had written code with encapsulated ones:
list_block list = LIST_EMPTY_INITIALIZER(list); // eg circular, expanding to { &list, &list }
leveraging the advantages of both.
So, I wasn't surprised in any way. From the description of NIL and the cons cell linkage in the book I was reading, I instantly recognized it as the unencapsulated style of lists, like "list_node *list = NULL".
In C, it would be obvious that this can't work:
list_node *list = NULL;
list_append(list, list_node_new(42));
// wrongly expecting list to have changed from NULL to non-null!
but that, with an encapsulating list block object instead of a raw pointer to a node, this could work:
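(a sketch, using the same hypothetical helpers as above:)

    list_block list = LIST_EMPTY_INITIALIZER(list);
    list_append(&list, list_node_new(42));
    /* works: the block is passed by address, so the append can update
       the head pointer stored inside it */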
(Because C does not provide either of these, you can't blame the language for misunderstanding anything: only yourself, if you wrote that list yourself, or the library author.)
I remember it was intriguing to me how Lisp gets away with the unencapsulated single linkage for everything: how that is the list structure for the entire language. In the light of functional programming (you can always just keep consing up new conses to transform lists) together with garbage collection, it very soon clicked for me. I remember thinking that if we could just keep mallocing new nodes and not worry about freeing, that would be pretty nice to work with in C.
I've always wanted to try Lisp but the fact that it is dynamically typed has always scared me away (and my experience with dynamically typed languages is the same - they are bug magnets). Rust is my go to and I don't think that will change any time soon.
As I'm sure you know, there are a few good strong-typing libraries that you can use at any layer of code.
A bit like how python has type hinting + libraries to have static typing.
But there is something to be said about it not being standard. Library code and existing codebases generally (in my experience) don't do that.
It really is the incredibly weird problem of a language being more extendable and capable than anything else.
But because of that it's really hard to work on existing code as you have to enter with 0 assumptions if you want to avoid getting "tricked" while catching up.
I thought there was some interesting history in the book, and certain Common Lisp implementations like SBCL are incredibly fast and powerful (some real secret alien technology there), but I thought that I could generally make shorter and clearer programs in just about any common scripting language (Python, Perl, PowerShell...etc). Granted, I know those languages better, but the book honestly didn't make me want to write Lisp. Homoiconicity is insanely powerful, but I find that specific syntax for certain things is easier for me to read and write up to a certain point.
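(To illustrate the homoiconicity point: code is lists, so extending the language is just list manipulation. A minimal sketch:)

    ;; A WHILE loop the language doesn't ship with, built in three lines:
    (defmacro while (test &body body)
      `(loop (unless ,test (return))
             ,@body))

    ;; (let ((i 0)) (while (< i 3) (print i) (incf i))) prints 0 1 2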
The best part of Land of Lisp is learning to use SLIME. Except that learning it (and REPL-driven development) makes all other language/editor integrations seem inadequate! (one exception I know is CIDER for Clojure, also excellent)
Not my experience at all. I have spent enough time doing REPL-driven development to know that I most definitely do not prefer it. It might be superior for you. But that doesn’t automatically mean that it will also be superior for anybody else. We are all different, with different preferences and work styles.
Sure. I think REPL-driven development is great for what I call exploratory programming. When you are playing around with ideas and want to quickly try out different approaches and quickly test if a new function is working. However the code that I normally deal with are very large code bases where a lot of the code is not written by myself. And I find it a lot easier to navigate and build a mental model of where things are and how it works using an IDE. It simply works better for my brain but of course YMMV.
Why not both? Are the two concepts somehow incompatible?
A REPL is a (very) powerful tool for testing, probing, inspecting (etc) code; but it is only one tool. It should be a part of your IDE (if you go for such things), not replace it.
Wouldn't it make more sense to automate the tests instead of doing it manually? I have 9000+ auto tests for the large complex C++ code base I am currently maintaining. And the result is zero customer production bugs in the last 5+ years. And I routinely refactor and rewrite large parts of the code without worrying about it.
Yes, automated test suites are great! I don’t know that they need all the things some people stuff into them, but the idea is a useful one.
If you take my comment in the context of REPL-driven development, it’s going to seem tangential at best. But you mentioned that you prefer IDEs over REPLs; I’m merely suggesting that the two need not be exclusive.
Unless you’re advocating for TDD, in a strict sense?
Not OP, but I would venture that REPL-driven development encourages a style that is essentially trial and error programming.
This may be good for small scale, short lived scripts and projects but it does not scale.
My experience from a somewhat large (400+ people) Clojure project is that people would be better off building a test suite as a checked-into-source-control REPL substitute to drive their experiments and make sure the expected application behaviour stays the same over time.
REPLs may be useful, but don’t use them as your only tool.
Yep a good match for how I am thinking. One of the large projects I work on is written in C++ and it has more than 90000 use case tests to make sure it is safe to deploy into production. I haven’t had a customer production bug in the last 5+ years.
For me, the biggest problem with REPL-driven development is the handling of global state. That's why there's no fewer than 5 (!) actually-being-used Clojure libraries for handling it.
What "handling of global state" are you referring to? The state of "vars" or the state of actual application data (a la "atom"s)? The former being solved with the various ns-* functions, the latter depends on the application and how state is being used. If you could, could you share the libraries you're referring to too?
My reaction as well. The exaggerated claims of Lisp programs being bug-free and non-Lisp programmers being slaves/prisoners/drones etc. don't give me confidence in anything the author claims. I know it is supposed to be funny, and I do enjoy the humour, but I would prefer some real-life stories about how complicated problems were fixed in Lisp better than they could have been in other languages. Lisp has cool features, but that doesn't necessarily result in better software being developed in Lisp. Certainly, looking around, there doesn't seem to be a lot of successful software written in Lisp. The world runs on other languages.
Lilypond (http://lilypond.org/index.html) is music typesetting software that is not for everyone, but can be a joy to use if you have a software background.
It makes extensive use of Scheme under the hood, because C++ proved "too inflexible". I've never looked at the details of what the authors meant by that.
They have some notes on it here if you're curious:
Thanks for the link! That was a great read. And a good example, I think, of using Lisp embedded in a C++ program to give the user more power and flexibility. I think it is the right way to do it when your UI is basically code. A lot of games do something similar (using embedded Python, Lua or Lisp) to make it possible for gamers to extend the game. I think Factorio uses Lua for this. But Lisp would have been another good choice I think.
I'm familiar with game engines embedding scripting languages, but using Lisp for that honestly hadn't occurred to me (despite being an Emacser that has written several small elisp packages).
Are you aware that Crash Bandicoot for the Playstation and a bunch of follower games were originally written in a Scheme dialect, with the development environment written in Common Lisp (on SGIs)?
Yep and I think it is super cool. More stories like that would do more to improve Lisp adaption than yet another “Lisp is great” article. Don’t make empty claims. Do something cool and then talk about it.
Ha - that neat little music video is good marketing. I'm half tempted to try the book based on that. The only thing really stopping me is that I hate dynamically typed languages, but still I wonder if I shouldn't try Lisp just to "complete my education".
Be aware that SBCL does pretty good compile-time type checking. It helps a lot during dev. And there's the new https://github.com/coalton-lang/coalton to get something like Haskell atop CL.
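For example, a sketch (the exact warning text varies by SBCL version):

    (declaim (ftype (function (fixnum fixnum) fixnum) add-ints))
    (defun add-ints (a b)
      (+ a b))

    (defun caller ()
      ;; SBCL warns at compile time that the constant "one"
      ;; conflicts with the declared FIXNUM argument type.
      (add-ints "one" 2))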
Lisp also has run-time type checking. In contrast to statically compiled languages, Lisp doesn't throw away its types at run time, and is always type-safe.
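For example (output shown roughly as SBCL prints it):

    * (type-of "hello")
    (SIMPLE-ARRAY CHARACTER (5))
    * (car 5)
    ;; signals TYPE-ERROR: the value 5 is not of type LIST.
    ;; The object carries its type at run time, so the mistake is
    ;; caught instead of corrupting memory.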
But one of the big advantages of typed languages is that you CAN throw away that information and still be safe (proven by the compiler). You don't have to pay for types at runtime.
I would phrase it a bit differently: Lisp gives you great tools for fighting bugs (which most code initially will have). In that sense, the comic is right.
Also, given all the powerful features, the programmer is more productive and can spend more time on bug-hunting.
Personally I didn't get that message amid all the cutesiness, and I don't see how anyone would get that message if they weren't coming to the comic with that belief already.
Can you elaborate on what Lisp gives you to fight bugs that other languages don't have? I know Lisp quite well and I am not sure what specifically you are thinking about.
As I often point out, though; I don't think we have any real insights as to what makes something that will be adopted by the masses. Outside of throwing effort at it.
Too many of us in the technical space get caught on the idea of throwing that effort at the solution and the problem. You can also throw effort at the actual adoption by the masses. That is, market outreach and general sales.
Land of Lisp (2010) - https://news.ycombinator.com/item?id=19677292 - April 2019 (80 comments)
The Land of Lisp - https://news.ycombinator.com/item?id=15417735 - Oct 2017 (135 comments)
How Lisp is Going to Save the World - https://news.ycombinator.com/item?id=5030803 - Jan 2013 (229 comments)
Land of Lisp - https://news.ycombinator.com/item?id=3481456 - Jan 2012 (7 comments)
Land of Lisp - https://news.ycombinator.com/item?id=3013673 - Sept 2011 (6 comments)
Land of Lisp is finally out...and has a music video. - https://news.ycombinator.com/item?id=1836935 - Oct 2010 (108 comments)