
I am new to programming. What is so special about Lisp that I keep hearing about it on Hacker News (often in machine learning threads and from old programmers)?



You will get different answers from different people. There don't appear to be any commonly accepted universal truths about Lisp (otherwise we'd all be programming in some variant of it by now). However, there are plenty of good ideas that many more popular languages have adopted.

What made it special for me was discovering the link between symbolic computing and the lambda calculus. The kernel at the center of every Lisp is a substrate upon which languages are built. eval knows nothing about your program, but it can read Lisp forms and turn itself into a machine that executes them. This is immensely powerful. From a shockingly small number of primitives you could, in theory, build any language you need.
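To make that concrete, here is a toy sketch of such an evaluator in Common Lisp. The names tiny-eval and tiny-apply are made up, and a real eval handles far more, but the shape is the same: numbers evaluate to themselves, symbols are looked up in an environment, and quote, if, and lambda are the only special forms.

    ;; A toy evaluator: numbers evaluate to themselves, symbols are
    ;; looked up in an alist environment, and the only special forms
    ;; are QUOTE, IF, and LAMBDA. Everything else is application.
    (defun tiny-eval (form env)
      (cond ((numberp form) form)
            ((symbolp form) (cdr (assoc form env)))
            ((eq (first form) 'quote) (second form))
            ((eq (first form) 'if)
             (if (tiny-eval (second form) env)
                 (tiny-eval (third form) env)
                 (tiny-eval (fourth form) env)))
            ((eq (first form) 'lambda) (list 'closure form env))
            (t (tiny-apply (tiny-eval (first form) env)
                           (mapcar (lambda (a) (tiny-eval a env))
                                   (rest form))))))

    (defun tiny-apply (closure args)
      (destructuring-bind (tag (lam params body) env) closure
        (declare (ignore tag lam))
        ;; Bind parameters to arguments, then evaluate the body.
        (tiny-eval body (append (mapcar #'cons params args) env))))

    ;; ((lambda (x) (if x 'yes 'no)) 1)  =>  YES
    (tiny-eval '((lambda (x) (if x 'yes 'no)) 1) '())

Each new special form is only a few more lines on top of that.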

The more superficial qualities, for me, are a distinct lack of frivolous, obfuscating syntax; its uniformity; and its direct representation. I don't have to maintain an exhaustive mental mapping of syntactic digraphs and operators, I don't have to remember precedence rules, I don't have to remember the grammatical distinctions between expressions and statements, and I certainly don't have to maintain a running interpreter/compiler in my head. A simplified mental model of the lambda calculus, such as the substitution method, is enough to work out what any program is doing (and, given the interactive nature of Lisps, it's also easy to verify at the REPL).
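For example (the expression here is my own, trivial one), reading by substitution looks like this:

    ;; ((lambda (x) (* x (+ x 1))) 4)
    ;;   substitute 4 for x in the body:  (* 4 (+ 4 1))
    ;;   reduce the inner form:           (* 4 5)
    ;;   => 20
    ((lambda (x) (* x (+ x 1))) 4)   ; => 20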

Every other language I've worked with requires memorizing a laundry list of special cases and weird rules (e.g. how array and pointer parameters are changed by the compiler in C, the precedence rules mentioned above, etc.) imposed for efficiency's sake and often at odds with good engineering practices. Some strange monkey-brained part of me fools me into believing I am becoming a better programmer when I pollute my mind with these things... but I've learned over the years that it's just a trick. The ideas are important, and the implementation is just the painful cost we pay to make them reality.

Lisp just happens to have the least cognitive load in my case.


I agree with everything, though I think the idea of least cognitive load only runs skin deep (syntax).

This is helpful for small scripts that need to be scanned quickly, but anything non-trivial will already have significant layers of abstraction that take time to parse.

Lisp people love their macros. Clojure has fancy data structures and control flow. It will take time to understand these things whether it's written in Lisp or Sanskrit.


> the idea of least cognitive load I think only runs skin deep (syntax)

Perhaps... I'm not aware of any empirical study into the matter, so my claim is mere speculation.

There are, for example, plenty of highly productive Perl and Haskell programmers. Those languages are notorious for their gobs of arcane syntax. And yet their proponents claim it's an advantage.

"Cognitive load," in my case refers to the amount of information about the language and its compiler/runtime I have to recall in order to estimate how a given piece of code will execute.


Heavy use of macros means the cognitive load of even simple code can be massive, though. You can't really know what the result will be, especially when a macro can, in some cases, rewrite the entire program. If you constrain their use enough, sure, you won't run into these problems. But in any language it comes down to cognitive load versus expressiveness, and how you choose to wield it. Generally, denser code requires more thinking to understand, and the resulting complexity has to do with the underlying structure rather than the particular notation used to represent it.
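To illustrate in Common Lisp (with-retries and fetch-the-thing are invented for the example), the call site below tells you nothing about how often its body runs until you expand the macro:

    ;; A made-up macro: nothing at the call site says that BODY is
    ;; wrapped in a loop and may run several times.
    (defmacro with-retries ((n) &body body)
      `(loop for attempt from 1 to ,n
             thereis (ignore-errors (progn ,@body))))

    ;; The call reads innocently...
    (with-retries (3) (fetch-the-thing))   ; fetch-the-thing is hypothetical

    ;; ...until MACROEXPAND-1 reveals the actual control flow:
    (macroexpand-1 '(with-retries (3) (fetch-the-thing)))
    ;; => (LOOP FOR ATTEMPT FROM 1 TO 3
    ;;          THEREIS (IGNORE-ERRORS (PROGN (FETCH-THE-THING))))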


> There are, for example, plenty of highly productive Perl and Haskell programmers. Those languages are notorious for their gobs of arcane syntax.

I don't see anything arcane about Haskell's syntax. There are no surprises in vanilla\* Haskell syntax - no cruft in the syntax of expressions, no complicated syntactic sugar; you can see at a glance what is and isn't an infix function, and the same goes for type constructors (capital letters), etc.

What might be arcane is some people's use of user-defined infix operators. But I don't know if I would lump that in with "Haskell's syntax", since that isn't part of the grammar of the language. But YMMV.

\* I can't speak for GHC extensions or Template Haskell


Yeah, and it's important to remember that programming languages are for humans, after all. So if this paradigm is easier to reason about and makes you more productive (than coding in something imperative like C or Java or Python), then that makes it a great language for you.


Basically, Lisp is based on a very different paradigm than most mainstream languages. So if you know Lisp and program in something else, then half of the time you are reminded that the current problem would be really easy in Lisp.


Homoiconicity is a neat thing when you're just getting started, although there's some interesting discussion around that subject here - http://calculist.org/blog/2012/04/17/homoiconicity-isnt-the-...


Thanks for sharing. Interesting discussion.

I think the author confuses the concept (homoiconicity) and its practical consequences. I think this quote from the comments below summarizes it:

    > But the fact of the matter is that programmers interact with text, not with data structures.
    
    Not Lisp programmers. It's trivial to implement an environment where code is represented graphically and won't ever be represented as text.
    
    That's, actually, the point: language semantics isn't tied to textual representation.
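A tiny Common Lisp illustration of that point (nothing exotic here, just lists and eval):

    ;; Code is ordinary list data: build it, inspect it, run it.
    (defparameter *form* (list '+ 1 2 3 4))

    (first *form*)                      ; => +
    (rest *form*)                       ; => (1 2 3 4)

    ;; The same list, handed to EVAL, is a program:
    (eval *form*)                       ; => 10

    ;; And programs can rewrite programs before running them:
    (eval (substitute '* '+ *form*))    ; => 24, i.e. (* 1 2 3 4)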


Simplicity and homoiconicity. It's an all-around elegant language.



