I am the author of this essay and this is the umpteenth time that it has appeared in Hacker News. If you look it up using the Hacker News search function, it even invites readers to beat the dead horse one more time.
I'm surprised that it still gets so much attention. Every few months, someone volunteers to translate it into another language (see the bottom of the page).
I wrote that essay five years ago. Nowadays, just use Clojure or Racket and ignore what I've written.
And yes, I know, I really need to update the design of the site. I wrote it when I was still a rookie web developer. I'm starting a new job so I'll redesign my site in my copious free time.
I think it's a nice article; no problem with it appearing for the umpteenth time. Any article about Lisp is a good article for me!!
There are some pretty good points; however, there are one or two things that I take issue with, because they can be misleading:
1. The "lone wolf Lisp hacker" is not the only kind of Lisp hacker. Lisp has been used on important codebases at space missions and there are codebases of million-lines Lisp code at work right now, for example for airline reservations.
There are Lisp projects on GitHub being contributed to and forked. The number is small because Lisp's popularity is small compared to the main languages GitHub users prefer (i.e. Java, JS, etc.), but they do show there is collaboration between "lone wolves".
2. You write "Unless they pay thousands of dollars, Lisp hackers are still stuck with Emacs."
I have used many IDEs (Visual Studio, Visual Studio Code, Eclipse, Netbeans, IntelliJ, and many Borland products) and the combination of Emacs + SLIME is pretty good, to be honest.
Since you're here, what's your take on why Clojure has become so popular lately (compared to the mindshare that Common Lisp or Scheme seem to have nowadays)? Although I've barely touched Lisp (and never touched Clojure), it seems weird to me that Clojure is so widely used compared to other Lisps. From what I can tell, the main differences are that it's on the JVM (obviously) and that it uses loops rather than recursion, but neither of these really explain to me why things like ClojureScript exist instead of "SchemeScript" or something like that. Do you think it's related to the ideas presented in the blog post, or is there something I'm missing?
My take is that Clojure's community is distinct from that of other Lisp communities.
Old school Lispers are the Jacobites of the Computer Age. Their warnings were ignored, so their nightmares came true; and the world has changed so much, as a result, that few can even imagine an alternative. They were right about nearly everything, and now they are irrelevant.
So pour out a drink for the king over the water and for Genera. Curse this present dark age of bureaucrats and Posix. And then MOVE ON. The Clojurists are the first sizable community of Lispers that has done that.
>Lisp is so powerful that problems which are technical issues in other programming languages are social issues in Lisp.
This is so true.
Common Lisp is like a post-scarcity anarchist society where everyone already has a replicator. The culture of getting together to solve problems doesn't exist. In other languages, implementing something is so damned hard that if you have a half-assed implementation, others will take it and improve upon it, because it saves work, and it gains momentum. In the Lisp world people take a look at it and say: "This is a half-assed implementation; I could whip something better together in a weekend." If it takes a village, a village will form. If not, everyone goes their own way and never gets together.
"I am reminded of Gregor Kiczales at ILC 2003 displaying some AspectJ to
a silent crowd, pausing, then plaintively adding, `When I show that to
Java programmers they stand up and cheer.´" -- Kenny Tilton
Time and time again you see the following pattern:
A: Why does Common Lisp have no X?
B: Writing a rudimentary X takes 2-3 days. Writing it as a solid package takes 2-3 weeks. Why don't you implement it?
What A meant is that it would be a standard that could be used across different software packages.
From there it went into Interlisp, Loops, Flavors, CLOS, ...
Gregor expanded these ideas and moved it to Java.
The Lisp community worked for 30 years on stuff like that, and it took several years to standardize the Common Lisp Object System, which incorporates some of the ideas of Aspect-oriented programming.
> What A meant is that it would be a standard that could be used across different software packages.
"there exists" (existential quantification) is valid argument if someone implies "there don't exist".
The argument was common tendency or ratio.
I speak as one who has done numerical programming with CL. I bet most of us have a personal numerical library that implements a third of what would be needed for an initial release of numpy or matlab, with code that is somewhat incomplete compared to matlisp or clasp. It's just too easy to accidentally implement an almost-viable general library for numerical computing in CL. Of course you don't need to document it or polish it, because only you ever use it.
In retrospect we would have been better off working together, bottom up, on a well-documented replacement for R, numpy, and matlab. The amount of work would have been only slightly more.
If you feel the need to contribute code, then there is Quicklisp to distribute it.
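For example, at the REPL (a minimal sketch, assuming Quicklisp is already installed; alexandria is just one well-known community library):

(ql:quickload "alexandria")           ; fetch and load a shared library
(alexandria:flatten '(1 (2 (3 4)) 5)) ; => (1 2 3 4 5)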
> Not as much as they should, which is the whole point of the article, and a core lesson of history.
One of the uses of Lisp is prototyping. The Interface Builder from NeXT/Apple was originally written in Lisp. The Dylan IDE was prototyped in Lisp. There is lots of research code, some of it never meant to be made available as a supported library/application. Maybe somebody writes a theorem prover in Lisp for his/her PhD. The chance of finding users is slim - there are already a few others. But if you want to share, check out ACL2 (A Computational Logic for Applicative Common Lisp); it comes with lots of shared code:
ACL2 is shared and maintained. Its extensions are shared, with stuff from AMD and Intel.
Most people don't know that and have never heard of it, because these are niche applications of Lisp, here using theorem provers to check the design of processors.
This is one example; there are several other areas where people share a lot of code. The canonical examples are the Common Lisp implementations themselves. They share(d) code in various ways. The first CLOS implementation was shared. The pretty printer was shared. The CMUCL compiler was shared from day zero, sometime in 1981 or 1982 - before Common Lisp even existed. SLIME is shared.
All this poetic nonsense about flexibility and writing prototype programs is irrelevant. Lisp has real technical flaws that prevent its use:
1. GC, which prevents its use in real-time and performance-oriented code.
2. Pervasive use of inefficient lists and dynamic typing, which puts steep memory requirements on top of performance costs.
3. Arcane write-only syntax which becomes context-sensitive with the Domain-Specific Languages which Lisp programs devolve to, creating unreadable and overly condensed spaghetti code (especially macros), making existing code inaccessible for developers.
4. Huge runtime which is embedded in every executable, that makes it less competitive with scripting languages. Lisp requires eval() and runtime interpretation which other languages optimize out and replace with fixed code paths. (Some external programs called "tree shakers" remove the Lisp runtime cruft, but they're the exception rather than the rule.)
5. Lisp has very loose safety. Any piece of data can be potentially executed at runtime and Lisp programs have to be excluded from write-XOR-execute/NX protection, since Lisp code/data are intermingled.
Executable stack and heap expand the areas for exploits and create a general lack of stability - any bug can corrupt running code. Lisp is so far removed from any concept of safety it even allows the programs to be modified at runtime (this is a fundamental feature shared with polymorphic viruses).
Keep in mind that a compiler like SBCL will put Lisp, performance-wise, in the range of other statically compiled languages/implementations.
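As a sketch of how that is typically done (DOT is a made-up example, not a benchmark): you add type and optimization declarations, and SBCL compiles the loop to tight native code:

;; With declarations, SBCL emits machine code for this loop that is
;; comparable to what a C compiler would produce.
(defun dot (a b)
  (declare (optimize (speed 3) (safety 0))
           (type (simple-array double-float (*)) a b))
  (loop for x across a
        for y across b
        sum (* x y) of-type double-float))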
> 2) puts steep memory requirements on top of performance costs
Such that the Dylan IDE ( https://news.ycombinator.com/item?id=14478942 ) written in Macintosh Common Lisp needed 20 MB RAM. Steep memory requirements! Current Lisp implementations run nicely on a first-gen Raspberry Pi.
Lisp fought with memory efficiency in the 80s (the Lisp Machine operating systems ran in 20 MB RAM + 100 MB virtual memory - that is, 4 MWords of 40-bit memory), but the footprint hasn't grown much since then. The jump to 64-bit implementations has brought some size increase. But the size varies widely between large machine-code implementations (say, SBCL), space-efficient bytecode implementations (CLISP), and tiny executables created by whole-program compilers like mocl.
> 4) Huge runtime which is embedded in every executable, that makes it less competitive with scripting languages
If you want to compete with 'scripting languages', then just don't include the runtime. You can use many Lisp implementations just like scripting languages.
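For example, SBCL has a --script mode, so a plain file runs like any shell script (a minimal sketch):

#!/usr/bin/sbcl --script
;; Runs with no REPL and no image dumping, like any other script.
(format t "Hello from a Lisp script~%")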
> Lisp requires eval() and runtime interpretation which other languages optimize out and replace with fixed code paths
If a compiler is used (say: SBCL, ECL, LispWorks, GCL, CCL, Allegro CL, mocl, ...), then Lisp does not require runtime interpretation.
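You can check this at an SBCL REPL (ADD2 is a throwaway example): DEFUN compiles to native code immediately, and DISASSEMBLE shows machine code, not bytecode:

(defun add2 (x) (+ x 2))
;; Prints the compiled machine code for the host architecture.
(disassemble 'add2)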
> some external programs called "tree shakers" remove the lisp's runtime
They don't. Treeshakers remove unused code and data. Parts of the runtime are still there.
> 5) Any piece of data can be potentially executed at runtime
If you remove the compiler & interpreter from the executable, that can't happen. See for example the LispWorks documentation on 'Delivery':
Not necessarily. Code in compiled Lisp implementations is machine code. If all else fails, you can always compile Common Lisp to C code using GCL, ECL, MKCL, mocl, or CLASP (C++).
> Lisp is so far removed from any concept of safety it even allows the programs to be modified at runtime
Only if you include an interpreter.
Actually one of the nice features of many Lisp implementations is that usually all data is tagged and that Lisp code checks applicability at runtime, incl. array/string/vector bounds checks...
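A quick illustration of what those checks buy you (a sketch, with default safety settings):

;; An out-of-bounds access signals a condition you can handle,
;; instead of silently corrupting memory.
(let ((v (make-array 3 :initial-element 0)))
  (handler-case (aref v 10)
    (error (e) (format t "caught: ~a~%" e))))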
> Keep in mind that a compiler like SBCL will put Lisp, performance-wise, in the range of other statically compiled languages/implementations.
Hmm... that claim is made a lot, and it works in theory (see Jitterdämmerung[1]). However, practical reports from non-advocates don't seem to bear out those results. For example, Michael Stonebraker says that they started writing Postgres in LISP and had to revert that decision because everything was an order of magnitude slower[2].
> started writing Postgres in LISP. Had to revert that decision as everything was an order of magnitude slower
If they have never written a high-performance application in Lisp, they would have needed a bit more work. The Lisp version was written more as an experiment:
> Our feeling is that the use of LISP has been a terrible mistake for several reasons. First, current LISP environments are very large. To run a ‘‘nothing’’ program in LISP requires about 3 mbytes of address space. Hence, POSTGRES exceeds 4 mbytes in size, all but 1 mbyte is the LISP compiler, editor and assorted other non required (or even desired) functions. Hence, we suffer from a gigantic footprint. Second, a DBMS never wants to stop when garbage collection happens. Any response time sensitive program must therefore allocate and deallocate space manually, so that garbage collection never happens during normal processing. Consequently, we spent extra effort ensuring that LISP garbage collection is not used by POSTGRES. Hence, this aspect of LISP, which improves programmer productivity, was not available to us. Third, LISP execution is slow. As noted in the performance figures in the next section our LISP code is more than twice as slow as the comparable C code. Of course, it is possible that we are not skilled LISP programmers or do not know how to optimize the language; hence our experience should be suitably discounted.
They had problems because Lisp alone needed 3 megabytes of address space (!) to run - a 'gigantic footprint' - needed to work around the GC, and their C version was twice as fast.
Their main problem was that they wanted to write it in a mix of C and Lisp, and that was hard for them to debug. I'm not surprised that people with no experience writing applications in a mix of C + Lisp failed to write a database engine that way. I'm pretty sure it is easier to write a database engine in plain C than in a mix of C and Lisp, working without GC and crossing the boundary between a managed runtime and plain C data. People have done it, but those people had quite a bit more experience writing such applications.
You can buy for example AllegroGraph, a graph database in Allegro Common Lisp:
"As noted in the performance figures in the next section our LISP code is more than twice as slow as the comparable C code."
I don't doubt that you can write productive systems in CL (bit of a fan myself), but the "oh we have smart compilers that just make it as fast as C" just isn't true.
> I don't doubt that you can write productive systems in CL (bit of a fan myself), but the "oh we have smart compilers that just make it as fast as C" just isn't true.
But 50% of C is great. Imagine that in Lisp you can achieve 50% of the performance of C. That means you can write large amounts of your code in Lisp, without the need to use C.
Actually it is both better and worse. One can achieve better than 50%, and in many cases one is slower than 50% of C. For example, you can optimize your code at runtime using the Lisp compiler. But in many cases you will be slower, since C compilers and libraries tend to have better support for the hardware.
For many, many applications 50% of the performance of C is completely sufficient. Zillions of lines of code are written in much slower environments...
That basically means Lisp is probably about as fast as C in practice, because C that is portable, easy to maintain, and robust against bad inputs also runs at only 50% of the speed of C.
If you take any fast C program and add all the logic that is missing to make it robust and secure, and refactor it so that it is halfway nimble against changing requirements, it's no longer running at the mythical speed of C.
I don't deny it, just that that expressiveness has a cost, and Lisp advocates assume that cost is negligible, while in reality it is important (Lisp Machines were created just so that Lisp would run at acceptable speed).
The thing is, back in the 80s (before I was even born!), the cost was huge, and it has stayed roughly the same while hardware got faster. So, yeah, it cannot best good C/C++, or even optimized Java, but it still outperforms any scripting language.
You know what they say, the right tool for the right job.
But 50% (2x) was the lower bound of the slowdown, it was at least twice as slow (from that article), and in the Turing Award lecture Stonebraker mentions an order of magnitude.
Once again, I agree with most of your points, I just think the "we have magical compilers that make the overhead go away" (paraphrasing) is a bit too glib.
50% of speed isn't the whole deal. Latency and memory use matter too.
Audio, video, games, networking, etc. require minimal latency.
Memory use is also critical: when your data structures explode into GBs of RAM, being twice as large means you can run out of RAM and start swapping to disk, lowering performance substantially. Finally, 50% of C's speed in isolated benchmarks with minimal GC use doesn't mean 50% inside a complex application, where the GC can pause the current thread or steal time continuously.
I have a couple of Lisp Machines at home. They ran an object-oriented OS fully written in Lisp incl. window system, networking with TCP/IP, object-oriented database, text editor, graphics editor, 3d suite used by companies like Nintendo and Sony, mail server, file server, ...
That thing ran on a 5 MIPS (million instructions per second), 40-bit processor with 40 MB RAM in the late 80s / early 90s.
Suddenly, with 1000 times the CPU speed, multi-core machines, fast GPUs, GBs of RAM, etc., it should not be possible to do that anymore?
Because suddenly Lisp is too slow? Why was it fast enough to run that stuff in the 80s on tiny machines?
Naughty Dog wrote Crash Bandicoot for the PlayStation using a Common Lisp IDE and a specialized Scheme runtime for the PlayStation. It sold millions of copies, and critics liked it for its fluid gameplay.
Suddenly with machines much faster than the first Playstation this should no longer be possible?
>Why was it fast enough to run that stuff in the 80s on tiny machines
Lisp Machines were invented precisely because Lisp was slow on common hardware, hence the 8 extra tag bits. Lisp Machines are specialized Lisp hardware.
Lisp Machines were invented as the first personal workstations, because the common hardware was timeshared PDPs. The researchers wanted powerful personal computers with interactive user interfaces - something which did not exist at that time.
Still that was not the question. The question was why 1000 times faster computers with much more space should not be able to run Lisp applications when in the 70s it was possible to run a whole OS on tiny machines.
I don't claim that "Lisp cannot run on modern hardware", just that it has performance sapping features that lower its competitiveness vs mainstream compiled languages.
If you don't care about performance and don't pretend to be in the same league as C, I have no problem with Lisp/Scheme. It's the same deal with people pretending that languages with GC, such as Java, are fit for systems programming and low-level tasks.
Java pretty much defeated the whole "the sky will fall if we deploy something with GC" nonsense. (Thank you, Java). What you're dealing with in this thread is twenty-year-old recycled FUD.
Your apps (such as Java apps) are user-level programs. It's irrelevant whether you can run interpreted Brainfuck or Python on crutches.
Read this comment:
https://news.ycombinator.com/item?id=14488427
But you ignore the whole stack: the OS, the libraries, and the Java runtime are all written in C/C++; only the end-user server app is written in Java (which obviously means C/C++ is obsolete and Java is the systems language as fast as C/C++).
>Keep in mind that a compiler like SBCL will put Lisp, performance-wise, in the range
Simple short benchmarks don't show a complete picture of the language, since they're easy to optimize
>written in Macintosh Common Lisp needed 20 MB RAM
At the time when computers had 32/64 MB of RAM.
>. You can use many Lisp implementations just like scripting languages.
>If a compiler is used (say: SBCL, ECL, LispWorks, GCL, CCL, Allegro CL, mocl, ...) , then Lisp does not require runtime interpretation.
>Only if you include an interpreter.
So which one is it?
>Treeshakers remove unused code and data. Parts of the runtime are still there.
>If all fails, you can always compile Common Lisp to C code
Compiling to C doesn't improve the safety of Lisp. As a C programmer, "compiling to verbose cryptic C" doesn't sell me as a feature; it reads more like a potential security threat.
> Simple short benchmarks don't show a complete picture of the language, since they're easy to optimize
I didn't say anything about short benchmarks.
> At the time when computers had 32/64 MB of RAM.
Exactly. Phones now run with 4 GB RAM. A Raspberry Pi now comes with 0.5 GB or more.
> Huh, What kind of unused code and data is there in a 25MB hello world?
debugger, inspector, compiler, interpreter, command line interface, debug info of those, ...
> Compiling to C doesn't improve safety of Lisp. As a C programmer, "Compiling to verbose cryptic C" doesn't sell me as a feature, more like a potential security threat.
You get rid of any runtime interpretation and compilation. All that remains is a static C program.
>All that remains is a static C program.
That's what I mean by "verbose cryptic C". Machine-generated C is huge and hard to comprehend in its entirety, with recursion and continuations practically impossible to follow.
It is just a step above disassembled programs. C isn't some magic format that prevents bugs or ensures safety (in fact the opposite: the larger the code, the more bugs it will produce).
An example is GNU Common Lisp (GCL).
Run the fizzbuzz below with /usr/bin/gcl -batch -c-file -compile $1 ($1 being the file below):
(declaim (optimize (safety 3) (debug 0) (speed 3)))

(defun is-mult-p (n multiple)
  (= (rem n multiple) 0))

(defun fizzbuzz (&optional n)
  (let ((n (or n 1)))
    (if (> n 100)
        nil
        (progn
          (let ((mult-3 (is-mult-p n 3))
                (mult-5 (is-mult-p n 5)))
            (if mult-3
                (princ "Fizz"))
            (if mult-5
                (princ "Buzz"))
            (if (not (or mult-3 mult-5))
                (princ n))
            (princ #\linefeed)
            (fizzbuzz (+ n 1)))))))
If you recognize Lisp as a family of languages, then there is still newLISP, which adds only some 100 KB to an executable. And for a lot of stuff it's fast enough.
It's embarrassing watching you trying to argue, when it's more than obvious you haven't got the slightest clue of what you're talking about and probably never used Common Lisp in your life.
Rather than wasting your time propagating fallacies and outright lies, wouldn't it be better to get informed?
"Informed" about what? I don't blindly follow language evangelism.
Its the Lisp advocate job to prove their language is as great as they claim.
If for example a site provides an example of 25MB hello world, i can instantly estimate what kind of bloat i deal with and move on. I don't have to (declaim (optimize speed (safety 0))) and use a tree shaker to squeeze the bytes out of final executable just so that i can say SBCL (for example) is not a bloatware generator but these stupid users just forgot to add optimizations and tree-shake everything.
>That's a non sequitur.
At the time the Dylan IDE existed, it regularly consumed 3/4 of all RAM. So bragging about it being only 20 MB is like bragging that your app only consumes 12 GB of RAM today.
> 3. Arcane write-only syntax which becomes context-sensitive with the Domain-Specific Languages which Lisp programs devolve to, creating unreadable and overly condensed spaghetti code (especially macros), making existing code inaccessible for developers.
Do you have any evidence that this happens? I always see this as an argument against macros, but I don't actually see this problem manifesting in practice.
In fact, macros are much less scary than monkey patching like in Ruby or Javascript, because a macro can only affect the code within the particular invocation, and it must be imported to be used.
Also, the success of Rails seems to suggest that creating specialized DSLs can be a big benefit.
> 1. GC, which prevents its use in real-time and performance-oriented code.
Lisp is a great language but it can't be the silver bullet for everything. If you need hard real-time programming you need to look elsewhere. The same is true of Ruby, Python, C#, and other popular, well-accepted languages.
> 2. Pervasive use of inefficient lists and dynamic typing, which puts steep memory requirements on top of performance costs.
The same is true of Ruby, Python, C#, and other popular, well-accepted languages. And how can you call lists "inefficient"? Lisp uses linked lists, which are an efficient data type in the context of recursive functions, which Lisp coders make extensive use of.
> 3. Arcane write-only syntax which becomes context-sensitive with the Domain-Specific Languages which Lisp programs devolve to, creating unreadable and overly condensed spaghetti code (especially macros), making existing code inaccessible for developers.
The whole point of creating a Domain-Specific Language and then expressing the problem in that DSL is precisely to get away from unreadable, spaghetti code.
It seems you have not used Lisp enough.
> 4. Huge runtime which is embedded in every executable, that makes it less competitive with scripting languages. Lisp requires eval() and runtime interpretation which other languages optimize out and replace with fixed code paths. (Some external programs called "tree shakers" remove the Lisp runtime cruft, but they're the exception rather than the rule.)
Common Lisp is generally faster than most of the scripting languages out there (Ruby, Python, Perl), and in reality approaches C speed...
As for "Lisp requires eval() and runtime interpretation", this is not true. You need to read more about the Lisp compilation model.
> 5. Lisp has very loose safety. Any piece of data can be potentially executed at runtime and Lisp programs have to be excluded from write-XOR-execute/NX protection, since Lisp code/data are intermingled. Executable stack and heap expand the areas for exploits and create a general lack of stability - any bug can corrupt running code. Lisp is so far removed from any concept of safety it even allows the programs to be modified at runtime (this is a fundamental feature shared with polymorphic viruses).
"It even allows the programs to be modified at runtime". You mean allow the program to self-modify, right?
How horrible, how sinful!!
I guess we should stay with statically linked, statically compiled, statically typed, procedural-only, manually-memory-managed code. I guess we should all go back to Algol 60 then.
> A random old-time Lisp hacker's collection of macros will add up to an undocumented, unportable, bug-ridden implementation of 80% of Haskell because Lisp is more powerful than Haskell.
How so? Can we program Haskell's Hindley-Milner based type checking into Lisp with macros and homoiconicity, and have it run during compile-time?
I've been dabbling in OCaml and ReasonML recently, and am enjoying pure functional programming a lot. But I never could get into untyped FP like the very practical Clojure before because they didn't seem to offer me too much compared to Ruby and Javascript.
Functions in ES6 are more first-class than Ruby's lambdas. The single-line function definition is very concise and you can also do your programming with immutable values if you are careful about references and use something like the immutable-helper. Granted you don't get persistent data structures, but then you have ImmutableJS now. ES6 implementations also have TCO, so a recursion-based programming model should be fine as well.
In that context, the differentiating power of Lisp seems to be around macros - a very powerful way to write programs that write programs. I have done my fair share of meta-programming with Ruby and I realized I really didn't want the power to change my program model at runtime. While they made certain things easier - like Rails's ActiveRecord (which is superb btw) - the "inception"-like indirection made things very hard to reason about.
For me the secret-weapon language is less about power, and more about structure. We can build equivalent things with any turing-complete programming language, but the test is when the system has to grow, and that is precisely when most codebases fail. So far, statically typed FP with Sum Types and Pattern Matching and Immutability seems to be the cure, and macros and dynamic typing its very antithesis.
> So far, statically typed FP with Sum Types and Pattern Matching and Immutability seems to be the cure
If by "seems to be" you mean "claimed but yet to be demonstrated by the style's avid fans". Beyond some compilers -- for which those languages were specially designed and are particularly suited -- there are precious few large, long-maintained software written in such languages, and the few projects out there don't seem to be doing significantly better than the rest. I'm not saying it's not a fine style, but so far it doesn't seem to be a cure, let alone the cure, to anything.
> How so? Can we program Haskell's Hindley-Milner based type checking into Lisp with macros and homoiconicity, and have it run during compile-time?
I don't see why not. Typed Racket does more or less that (it's not Hindley-Milner, but that's because they wanted a type system that matched Scheme idioms better): https://docs.racket-lang.org/ts-guide/
> I have done my fair share of meta-programming with Ruby and I realized I really didn't want the power to change my program model at runtime. While they made certain things easier - like Rails's ActiveRecord (which is superb btw) - the "inception"-like indirection made things very hard to reason about.
Lisp macros run only at compile time. You can just ask the REPL (or your editor) to expand a particular macro invocation for you to see what code is generated.
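For example (READY-P and LAUNCH are hypothetical, and the exact expansion varies by implementation):

(macroexpand-1 '(when (ready-p) (launch)))
;; => (IF (READY-P) (PROGN (LAUNCH)))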
Of course, macros are still typically harder to debug than functions, but sometimes macros are the only way to abstract a certain pattern. That's why Template Haskell, CamlP4, OCaml extension points, etc. exist.
> So far, statically typed FP with Sum Types and Pattern Matching and Immutability seems to be the cure, and macros and dynamic typing its very antithesis.
I would argue that macros complement statically typed functional programming. Again, there's a reason for Template Haskell, CamlP4, OCaml extension points, etc.
> For me the secret-weapon language is less about power, and more about structure.
Your final statement is demonstrably incorrect. The world is full of large software projects not using statically typed FP, and there is actually surprisingly little from the likes of Haskell beyond self-serving academic noodling.
> surprisingly little from the likes of haskell beyond self serving academic noodling
Your final statement is demonstrably incorrect.
Facebook (Haxl), Microsoft (Bond), Google (Ganeti), Target (supply chain automation), Bank of America (Haskell is being used for backend data transformation and loading), Standard Chartered, Prezi, Android Bump (http://devblog.bu.mp/post/40786229350/haskell-at-bump), AT&T (automated processing of internet abuse complaints).
The final statement the parent responded to was, "so far, statically typed FP with Sum Types and Pattern Matching and Immutability seems to be the cure." That Haskell can be used to write real software, and that it is, in fact, used by a few projects in the real world, is no demonstration of it being a cure to anything, any more than Go's use in far more projects is a demonstration of it being an even better cure. That those Haskell projects don't seem to do significantly better than others does, however, seem to demonstrate that it is not a cure.
As I read it, the "self serving academic noodling", while too harsh an assertion (as some real-world Haskell users do enjoy working with it), does not refer to Haskell being useless, but to Haskell not seeming to substantiate the strong claims made by some of its fans; at least not in any striking, self-evident way. Haskell is a fine programming language, but touting it as the cure to our problems is a bit premature.
> In that context, the differentiating power of Lisp seems to be around macros - a very powerful way to write programs that write programs. I have done my fair share of meta-programming with Ruby and I realized I really didn't want the power to change my program model at runtime. While they made certain things easier - like Rails's ActiveRecord (which is superb btw) - the "inception"-like indirection made things very hard to reason about.
The difference between metaprogramming in Lisp (lisp macros) and metaprogramming in a language like Ruby, is that writing macros in Lisp is really, really easy.
It is simply not much harder than writing a normal function.
Thus, while in other languages, doing metaprogramming is a special affair, in Lisp it is a piece of cake, and nothing special at all. Lisp also allows you to "expand" the macro to see what the macro would do when called. This is also very easy to do.
So, I do agree with your point; however, let me rephrase it:
One of the most important differentiators for Lisp is really easy-to-write metaprogramming.
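To make that concrete (MY-UNLESS, DONE-P and KEEP-GOING are toy names): a construct that would be a built-in language feature elsewhere is three lines here, and you can inspect its output immediately:

(defmacro my-unless (test &body body)
  "Run BODY only if TEST is false."
  `(if (not ,test) (progn ,@body)))

(macroexpand-1 '(my-unless (done-p) (keep-going)))
;; => (IF (NOT (DONE-P)) (PROGN (KEEP-GOING)))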
For me the problem with Lisp is that it's just ugly to look at. Compare some Lisp code to an equivalent in Python, just in terms of the raw visual appeal of the text. The parentheses and indentations obscure the logical meaning of the code.
It's interesting you mention Python, the only major whitespace-sensitive language. Many people have strong opinions about that, too, but like parens, it's not really relevant.
Any decent Lisp editor will do automatic indenting, so you can still look at tab depth for meaning.
You could always try Clojure, which swaps out () for [] and {} for various things (function arguments, bindings, array literals, map/set literals, etc.)
However, once you start learning Lisp and writing programs yourself, it becomes fairly easy to read not only your own code but others' code.
> The parentheses and indentations obscure the logical meaning of the code.
Indentation has no effect in Lisp; it is only there to make the code more readable.
Parentheses are there to tell you clearly which form operates on which other form, so they shed light on the meaning of the code.
Visual appeal can be just fine, as long as the indentation is well-applied. The same is true for widely used languages like Java, C, C++, C#, and Javascript. Wrongly indented C code can be frustrating to read.
Here is a small, silly snippet of Common Lisp code; take a look. You should easily understand what each statement is doing.
So here we DEFine a class called "my-stack", with two members (or "slots") called "name" and "the-stack". "Name" will be an optional initialization argument, with the default value ("initform") "<Unnamed stack name>".
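In code, that class looks something like this (a minimal sketch reconstructed from the description above, assuming "the-stack" starts as an empty list):

(defclass my-stack ()
  ((name :initarg :name
         :initform "<Unnamed stack name>")
   (the-stack :initform nil)))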
(defmethod into (item (s my-stack))
  "Push an item into our stack"
  (with-slots (the-stack) s
    (push item the-stack)))
Here we DEFine a silly method that pushes an item into an object S of type "my-stack". The documentation string "Push an item into our stack" follows.
Then what we say is: using the slot "the-stack" of S, perform the following code - push the item onto the stack. Those two lines are almost English.
(defmethod sum-all ((s my-stack))
  "Return the sum of all items in our stack"
  (with-slots (the-stack) s
    (loop for item in the-stack
          summing item)))
Here I show you that code can be really straightforward to read. To sum the whole stack I use the following loop instruction:
"loop for item in the-stack summing item"
...and it does exactly what it seems to be doing. It's just as if it were plain English!
So one can write highly readable Common Lisp code if one wants to.
> Python and Lisp coder here. At first look, Lisp is ugly to look at.
Maybe for you; Lisp was great for me to look at from the moment I laid eyes on it.
Beauty is very subjective; but the technical advantages or disadvantages of a syntax aren't.
Some years before becoming a Lisp programmer, I had one tiny interaction with Lisp that wasn't terribly good: some CLOS example code in a book on OOP by Grady Booch. I was trying to follow the syntax and it seemed to have weird repetition in it, arising from the defclass syntax for declaring accessors and initforms, like this: (foo :initarg :foo) and (bar :accessor bar) type stuff. I was boggled by all the repetition: why do we have both foo and :foo, and so on? I had no clue about Lisp keyword symbols at that time.
I didn't hate the syntax, and it seemed less verbose than C++, for sure. The parentheses in it seemed perfectly fine. It was just a little boggling: more so than something like (let ((x 2) (y 3)) (+ x y)), or (loop for x below 10 summing x).
> I was boggled by all the repetitions: why do we have foo and :foo, and so on.
For what it's worth, here is my "defclass-easy" macro, which will create a class with the slots you want; it also takes a "required-slots" list as a parameter. Slots that are in both "slot-list" and "required-slots" will have an ":initarg" of the same name as the slot, and will raise an error if it is not supplied when making an object.
Enjoy!
(defmacro defclass-easy (class-name slot-list required-slots)
  ;; required-slots should be a list as well
  `(defclass ,class-name ()
     ,(mapcar (lambda (x)
                (if (null (member x required-slots))
                    (list x)
                    (list x :initarg
                          ;; creates :initarg :<slot name>
                          (read-from-string (concatenate 'string ":" (string x)))
                          ;; initform: (error "initarg required: xxxx")
                          :initform `(error ,(concatenate 'string "initarg required: " (string x))))))
              slot-list)))
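A hypothetical usage, to make the behavior concrete (POINT, X, Y and COLOR are made-up names): X and Y must be supplied at MAKE-INSTANCE time, while COLOR stays optional:

(defclass-easy point (x y color) (x y))

(make-instance 'point :x 1 :y 2) ; fine
(make-instance 'point)           ; signals "initarg required: X"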
It's interesting to note that Lisp has some "def easys", just not for classes. There is a defsetf for simple situations, so you don't have to use define-setf-expander. defsetf's simplicity isn't one-size-fits-all either; it has a short form and a longer form. In the same area of functionality, there is define-modify-macro which handles simple cases, freeing the user from calling get-setf-expansion and dealing with its return values.
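For example, DEFINE-MODIFY-MACRO turns the simple read-modify-write pattern into a one-liner (APPENDF is just an illustrative name here):

;; (appendf place list) expands into (setf place (append place list)).
(define-modify-macro appendf (&rest lists) append)

(let ((xs (list 1 2)))
  (appendf xs (list 3 4))
  xs) ; => (1 2 3 4)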
>Here I show you that code can be really straightforward to read.
Disagree 100%. That code is incomprehensible to me. And I don't think it's just because I haven't written Lisp in a long time. I can look at code in other languages that I've never used (eg Haskell) and it makes sense.
Part of the problem, of course, is that there are no type annotations. But even other languages without type annotations are easier for me to read.
(defmethod into (item (s my-stack))
  "Push an item into our stack"
  (with-slots (the-stack) s
    (push item the-stack)))
In the above, S is of type/class MY-STACK. ITEM is of class T, because no restriction is specified.
You could define a method for items of type NUMBER:
(defmethod into ((item number) (s my-stack))
  "Push an item into our stack"
  (with-slots (the-stack) s
    (push item the-stack)))
> I can look at code in other languages that I've never used (eg Haskell) and it makes sense.
That's unlikely. There is zero chance that somebody who doesn't know Haskell can understand any non-trivial Haskell code. Haskell has so many different concepts and a different notation, including type inference and complex types, that it is difficult to understand without prior Haskell knowledge.
I liked Lisp when I played with it, but realized that when it came to things I shouldn't* play with, I wouldn't be able to use it for any serious project. Seriously, even lacking a socket API is a sad state for a language (if I want to look at several other people's interesting code that actually does something, how many low-level communication frameworks am I going to need to know?).
*as in finical, security, encryption types of problems (each of which requires a decade's worth of training/learning to start to get right) that can ruin lives and companies.
Common Lisp has multiple "socket apis", most of them well-documented, widely used, well-tested and robust.
Your other point about "finical [sic], security, encryption-type problems" I do not understand at all since you do not go into any details.
I've noticed that a lot of people will claim a lot of things when it comes to Lisp, but it is extremely rare that these claims have any merit whatsoever. More often than not, those making the claims are obviously completely clueless and have never spent more than a couple of minutes [if that] looking at Lisp. Yet they feel confident making all sorts of false proclamations and spreading lies and misinformation.
> Programs written by individual hackers tend to follow the scratch-an-itch model. These programs will solve the problem that the hacker, himself, is having without necessarily handling related parts of the problem which would make the program more useful to others. Furthermore, the program is sure to work on that lone hacker's own setup, but may not be portable to other Scheme implementations or to the same Scheme implementation on other platforms. Documentation may be lacking. Being essentially a project done in the hacker's copious free time, the program is liable to suffer should real-life responsibilities intrude on the hacker. As Olin Shivers noted, this means that these one-man-band projects tend to solve eighty-percent of the problem.
I guess this is true for all programs in all languages. Distributing anything requires a lot of care in following community practices and in handling the edge cases stemming from incompatibilities in people's setups.
He writes a seductive argument, but it isn't very satisfying to conclude "this language is so powerful that it defeats itself". Likely there are other, more mundane reasons why the theoretical purity of the lisps isn't useful in practice.
For example, the endless (bracketing). My understanding is that the power of Lisp macros comes from this questionable, but distinctively Lispy, syntax. And I say questionable compared to natural human scripts - none of them work from the inside out. I've seen left-to-right (e.g., English), right-to-left, and top-to-bottom (e.g., Chinese). If there were some better-known examples of what Lisp macros are good for, then maybe the language would get better traction. But Lisp macros don't seem to have an example of why they are worth that trade.
Also, in the early days, C was a pretty close mirror to the underlying code execution - there was probably less mental overhead to squeezing out better performance.
> If there were some better-known examples of what Lisp macros are good for, then maybe the language would get better traction. But Lisp macros don't seem to have an example of why they are worth that trade.
FYI, Paul Graham's book, "On Lisp" covers macros in detail and is available as a free download here, if you are interested in the topic:
On Lisp is a comprehensive study of advanced Lisp techniques, with bottom-up programming as the unifying theme. It gives the first complete description of macros and macro applications. The book also covers important subjects related to bottom-up programming, including functional programming, rapid prototyping, interactive development, and embedded languages. The final chapter takes a deeper look at object-oriented programming than previous Lisp books, showing the step-by-step construction of a working model of the Common Lisp Object System (CLOS).
As well as an indispensable reference, On Lisp is a source of software. Its examples form a library of functions and macros that readers will be able to use in their own Lisp programs.
Lisp is no more 'inside to out' than any other programming language; it just uses the same syntax for each subclause, where other languages randomly choose different syntaxes for different kinds of expressions, so C
> that the power of lisp macros comes from this questionable, but distinctively lispy, syntax
No. It comes from homoiconicity: in this case, everything is a list. Because Lisp has great support for manipulating lists, you can do some wonderful things with it. Wisp [1] comes without parentheses, but still has the same macro system, and the same power.
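Concretely, a piece of code is just a list that you can pick apart with ordinary list operations, which is exactly what macros do:

(let ((form '(+ 1 (* 2 3))))
  (list (first form)    ; => +
        (third form)))  ; => (* 2 3)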
> If there were some better known examples of what lisp macros were good for
When the language gets in your way. That's it. Everything else can be a lambda. Lisp macros only start to seem pointless because Lisp itself is already a flexible language.
But, some real world macro examples?
Mutating functions (setf and the like).
Compile-time contracts. [0]
You have a key-value list that you constructed dynamically and want to bind it. LET can't just take a list, but a macro can handle it.
Killing boilerplate that involves bound values - see the sketch below. (This one spoils you. Other languages become infuriatingly inflexible about it.)
[0] Can't find the paper right now, but it relied on compile-time macros to validate types at compile time in a (large) subset of Common Lisp.
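And a minimal sketch of that boilerplate-killing point (WITH-RETRIES and FLAKY-CALL are hypothetical): the macro wraps its body in a retry loop and binds the attempt counter for the body to use:

(defmacro with-retries ((attempt max) &body body)
  "Run BODY up to MAX times, binding ATTEMPT to the current try."
  (let ((e (gensym "CONDITION")))
    `(loop for ,attempt from 1 to ,max
           do (handler-case (return (progn ,@body))
                (error (,e)
                  ;; On the last attempt, give up and re-signal.
                  (when (= ,attempt ,max) (error ,e)))))))

;; (with-retries (n 3) (flaky-call n)) retries FLAKY-CALL up to 3 times.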
Let me give you a simple example of the usefulness of macros.
Consider _any_ boilerplate code that gets repeated in your application. You know how boilerplate code is not only tedious to write, but also creates a maintenance headache whenever you need to modify 'the boilerplate' and thus go copy-pasting again through a lot of the code.
Macros totally destroy boilerplate. They reduce boilerplate to exactly zero.
The other usefulness is that, since they manipulate source code, they are more powerful than functions. That is, a macro can be seen as an ultra-powerful version of a function. This enables you to create functions (actually, macros) that do more than functions in other languages.
Here is a simple example from Conrad Barski, who has a tutorial where one creates a text-only adventure. This code defines a specific action the user can perform. In this case, to "weld" a "chain" to a "bucket".
(defparameter *bucket-filled* nil)

;; *location*, *chain-welded* and HAVE are defined elsewhere in the game.
(defun weld (subject object)
  (cond
    ((and
      (eq *location* 'attic)
      (eq subject 'chain)
      (eq object 'bucket)
      (have 'chain)
      (have 'bucket)
      (not *chain-welded*))
     (setf *chain-welded* 't)
     '(the chain is now securely welded to the bucket.))
    (t
     '(you cannot weld like that.))))
The code defines the action "weld" for a subject and an object - but specifically for the subject "chain" and the object "bucket", at the location "attic".
As you may infer, each time a new "action" needs to be defined (for a subject and an object, in a specific location), a new function will have to be written, and this function will look pretty similar to the one above. So after the fourth or fifth action you define, you will have lots of unnecessary boilerplate code.
Enter macros. The programmer understands there is a "pattern" there, and creates a macro "game-action":
(defmacro game-action (&key command subj obj place code)
  `(defmacro ,command (subject object)
     `(cond ((and (eq *location* ',',place)
                  (eq ',subject ',',subj)
                  (eq ',object ',',obj)
                  (have ',',subj))
             ,',code) ;; insert the user-supplied code (a single form)
            (t '(i cant ,',command like that.)))))
What this macro does is abstract this repetitive boilerplate and formalizes it. So now one can define the same "weld" game action like this:
(game-action :command weld
             :subj chain
             :obj bucket
             :place attic
             :code (cond ((and (have 'bucket) (setf *chain-welded* t))
                          '("the chain is now securely welded to the bucket."))
                         (t '("you do not have a bucket."))))
As you can see, now:
a. the definition of a game-action is much easier to understand,
b. boilerplate is reduced (it can be reduced even further, in fact),
c. and thus you can define more game actions, more briefly.
d. The code that verifies that the subject and object exist, and that the place is correct, is abstracted away, so you can switch to different data structures in the future, yet your game-action definitions will need no change - unlike the 'boilerplate' version, which would need manual editing of all the game-action definitions.
You can apply macros over macros, so there is no limit to their power.
>For example, the endless (bracketing). My understanding is that the power of lisp macros comes from this questionable, but distinctively lispy, syntax.
The endless bracketing is no scarier than the endless {} and () in a C, Java, C++, or JS program.
> If there were some better known examples of what lisp macros were good for
If there were better examples of what oscilloscopes are good for, every home would have one.
Macros are a tool for programming the programming language, which makes them somewhat specialized.
Good examples of macros are the ones which already exist; they are there for anyone to study: for instance the macros of the Common Lisp object system, the Common Lisp LOOP macro and such.
> questionable compared to natural human scripts
No successful, mainstream programming language follows human scripts.
Even though human scripts appear linear, just laying words down in order, the underlying syntax is anything but linear.
The syntax is very difficult and full of ambiguities that need a great deal of advanced semantic context to unravel. It is a poor model for designing programming languages.
Programming languages designed to imitate natural language and scripts have been failures at best, and programmer laughing stocks at worst.
> none of them work from inside to out.
Most common, mainstream programming languages have expressions that work from inside out and so does math notation.
GNU Make: $(foreach F,$(SOURCES),$(addprefix foo/,$(F)))
XML: <foo><bar>datum</bar></foo>
> eg, Chinese
Chinese characters, though, are built from radicals and components in a process that is basically inside out. The elements can stack left-right or up-down, or be fully or partially enclosed, in what amounts to a de facto recursive generative process.
> And I say questionable compared to natural human scripts - none of them work from inside to out.
That's irrelevant. A better comparison would be to other programming languages, and in this case you'd notice that all popular languages are "inside-out". Consider, what's the order of evaluation in the following Java code?
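Something like this hypothetical line, where evaluation runs from the innermost call outward - parseInt first, then the addition, then max, then println:

// Evaluated inside out, exactly like a nested Lisp form.
System.out.println(Math.max(Integer.parseInt("41") + 1, 0));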
This is a great lesson for any language designer. The quality of the language isn't measured by inherent properties. Like any product, its quality is measured by how it interacts with the constraints, requirements and dynamics of the industry it serves.
> Unless they pay thousands of dollars, Lisp hackers are still stuck with Emacs.
Still true today. SBCL + SLIME (or slimv for Vim) makes a decent IDE in my shell terminal, but it's nothing for mouse pushers and a far cry from a LispM.
On the other hand, adding object orientation to C requires the programming chops of Bjarne Stroustrup
I'm not sure this is true. You can get basic OO going on C in an afternoon with structs and function pointers. That is a very, very common technique in graphics programming (or was, before C++ optimizers got any good). ObjC is also a thing. The achievement of Stroustrup - and this is no minor thing, so don't think that I am discrediting him in any way - was he got everyone else to buy into his vision for a particular way of doing it.
Incidentally I recently attended a talk given by Stroustrup and he is a super nice guy and certainly doesn't think of himself as a genius.
I watched Stroustrup be hideous to a member of the audience at microsoft's going native conference 2-3 years ago.
"I understand what you are trying to ask and I think that you are wrong." -- or something so close to that as makes no difference. While the audience hooted, laughed and razzed.
Right then it became clear to me why the C++ community is so awful. It starts with Bjarne, who is /not/ a nice guy. Every other language community I've dealt with - C, Perl, Python, Scheme - is more pleasant than C++, with its anointed "smart people" who stand in their circle while failing to discourage everyone else who, well, acts exactly like Bjarne did above. And just to be clear, Bjarne gave a lecture in which he pointed out that linked lists are awful on modern hardware. The question was, "Shouldn't the standard library make the right decision for you if one choice is always wrong?" Whatever you think of the question or the appropriate response, Bjarne was just being nasty and enjoying it like a school bully.
Yes and no. I would define a genius as someone who makes an intellectual leap that no one has made before. Stroustrup is a smart guy who just works very, very hard. As such he is an inspiration, because he shows what an ordinary person is capable of if they work hard enough. Just my opinion, of course.
Geniuses are often people who work at something for a long time until they slowly accumulate the knowledge and experimental results to make such a leap. After all, Lazy Geniuses don't achieve much, even if they have the potential to.
Interestingly, I think this is related to Go's success, much to the chagrin of its functional-programming detractors. While there might be a dozen or more ways to write a subroutine in Haskell or Lisp, there is more or less one way in Go. Further, the route in Go is probably not as elegant (according to popular opinion, anyway) as in functional languages, so fewer are tempted to improve upon it. That said, contrary to this article, Go isn't harder to develop in than Lisp; it's likely easier, because there is one way to solve any given problem, and that way is usually quite straightforward. This isn't meant as a "Go is better than Lisp" post; I can appreciate each for its strengths. This is just marveling at the nuances of psychology and its impact on the technical domain.
Similar things were said in favor of Python over Perl, back in the day. Python had the "there's one way to do it" motto, directly in opposition to Perl's "there's more than one way to do it".
In reality, one only need look at the many different package management attempts to see how Python failed at living up to its own aspirations. If more evidence is needed, take a look at the packages themselves, and you'll see lots and lots of duplication of effort and incompatible ways of doing the same thing.
Perl's CPAN was widely reviled by Python fans for having lots of cruft collected over the decades, and not knowing if the package one uses is the "right way to do it", and Python's "one way to do it" approach was supposed to be a solution to that. Well, it didn't work out that way. There are now tons of crufty Python packages, with similar quality control issues.
I think this is merely a result of a language becoming popular and having a robust community of developers. Developers like to re-implement things and write them from scratch, whether just for their own education or amusement, or for geek cred, or to look good on a resume, or because they finally want to write it "the right way" (of which everyone seems to have a different opinion on). Then they abandon packages when they're no longer interested or when something else takes priority. Happens all the time.
If Go hasn't had something like this happen to it yet, it could be because of some visionary design that somehow avoided these issues, or it could be that it's just not as old and popular yet.
Regarding Go and package management, I'd say that Go doesn't have that issue yet because there really is no package management outside of using mercurial or git repositories. They are working on vendoring those in a one-way fashion, and I expect, as a Go programmer, that there will be an explosion of drift in due time. As you stated, "I think this is merely a result of a language becoming popular and having a robust community of developers." I couldn't have put it more succinctly.
Go simply isn't as flexible as Python. It's statically typed with no generics, so there is no "for loop vs map/filter/reduce" or "imperative vs functional" conversation to be had. This is categorically different than Python vs Perl, because Python's simplicity boasts were based on the discipline of the community as opposed to technical rails, as in Go's case.