Why Racket? Why Lisp? (practicaltypography.com)
418 points by Tomte on Aug 21, 2014 | 280 comments

I dislike this notion that Lisp (or Haskell or OCaml or ...) owe it to everyone else to explain and enunciate why it can be more productive to use Lisp.

""" That’s ask­ing too much. If Lisp lan­guages are so great, then it should be pos­si­ble to sum­ma­rize their ben­e­fits in con­cise, prac­ti­cal terms. If Lisp ad­vo­cates refuse to do this, then we shouldn’t be sur­prised when these lan­guages re­main stuck near the bot­tom of the charts. """

What?? Why? The problem is not and has never been communication of Lisp features. No one made a concise list of why C and Java are so great that people rushed to use them. Instead, they were pervasively used and taught in universities, they are pervasively used in the development of most applications for e.g. Windows and Linux, and they are relatively simple languages (in theory) whose semantics most people "get". No wacko higher-order crap, no weird curried things, no arrows or morphisms or monads or macros.

Programmers of such languages don't owe the rest of the world anything. Everyone has a choice about what to use, and it's each individual programmer's responsibility to choose wisely. There is plenty of material about Lisp and Scheme out there. Unfortunately, we are in a TL;DR culture where no one has the time to spend a few hours every week learning something new, since somehow that's too big a risk on their precious time.

Now, for some comments:

1. Everything is an expression.

He says this is a boon, but it's also confusing for "expressions" that have side effects. Too bad he did not talk about that, nor did he talk about how the expression-oriented way of thinking is really best suited to purely functional languages that allow for substitution semantics.
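The substitution point can be made concrete with a small sketch (in Python rather than a Lisp, purely for familiarity; the function and variable names are illustrative):

```python
# A sketch of why side-effecting expressions break substitution-style
# reasoning: substituting "equal" expressions changes program behavior.
calls = []

def f(x):
    calls.append(x)      # side effect: record every call
    return x * 2

# Pure reasoning says f(3) + f(3) == 2 * f(3), and the *values* agree...
assert f(3) + f(3) == 2 * f(3)

# ...but the substitution changed the program's behavior:
# the left side called f twice, the right side only once.
print(calls)  # [3, 3, 3]
```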

2. Every expression is either a single value or a list.

This is wrong, unless we devolve "single value" into the 1950's idea of an "atom". What about vectors or other literal composite representations of things? What about read-time things that aren't really lists or values?

3. Functional programming.

Functional programming is indeed great, but why don't we talk about how in Lisps we don't get efficient functional programming? Lisp has tended to prefer non-functional ways of doing things because Lisp will allocate so much memory during functional programming tasks that for many objectives, FP is far too inefficient. Haskell solves this to some extent with things like efficient persistent structures and compilation techniques such as loop fusion. Lisp doesn't really have any of this, and the data structures that do exist, many people don't know about or use.

4 and 5 don't really have to do with Lisp but particular implementations. That's fine I guess.

6. X-pressions.

What the hell is an X-pression?

7. Racket documentation tools.


8. Syntax transformations.

He made the same mistake as he so baroquely outlined at the start. What in the world are these "macros" and "syntax transformations" good for? You're just telling me they're more powerful C preprocessor macros that can call arbitrary functions. But I was taught that fancy CPP code is a Bad Idea, so boosting them is a Worse Idea.

9. New languages.

Same problem as 8. You say it's useful but you don't say why. Just that it's "easier".

10. Opportunities to participate.

Nothing to do with Lisp again.

* * *

Instead of all this glorifying of Lisp, etc., why don't we spend time increasing that library count from 2727 to 2728? Or do we need to go through an entire exercise about whether that time spent is worth it or not?

""" Rather, you are—be­cause a Lisp lan­guage of­fers you the chance to dis­cover your po­ten­tial as a pro­gram­mer and a thinker, and thereby raise your ex­pec­ta­tions for what you can accomplish. """

You're repeating everyone else. Notice how difficult it is to convey such things without being hugely abstract and unhelpful? Why don't other programmers see this huge productivity benefit from these Lisp wizards in their day-to-day life? Where are the huge, useful applications? They all seem to be written in C or C++.

""" It’s mind-bend­ingly great, and ac­ces­si­ble to any­one with a mild cu­rios­ity about soft­ware. """

It is accessible to those who are intently curious about theoretical aspects of software development, especially abstraction, and who can work through exercises that require mathematical reasoning. A "mild curiosity", in my experience with others, will not suffice.

* * *

This post may sound somewhat cynical and negative, but Lisp enlightenment articles are almost as bad as Haskell monad tutorials. They're everywhere and by the end, still no one gets it. And I don't like the attitude that because a group G doesn't understand it, and group H does, that H owes it to G to spoonfeed the information. That's not the case.

A couple comments:

> I dislike this notion that Lisp (or Haskell or OCaml or ...) owe it to everyone else to explain and enunciate why it can be more productive to use Lisp.

There are a couple reasons why I think this happens. The first is that people who use Lisp (or any other technology) will rightfully get excited about using it. Sometimes this translates into trying to evangelize it to other people, which is not a bad thing.

The second reason is that Lisp has a long history of 1) incubating major contributions to the field and 2) being continually marginalized by commercial practice. In many cases, this marginalization is warranted... there have been many good reasons not to use the language. In that kind of environment, it's easy to see enthusiastic users of Lisp getting a little defensive and writing apologetics about their tooling.

> What the hell is an X-pression?

He describes it, in some detail, directly in his post.

> Opportunities to participate. ... Nothing to do with Lisp again.

I do think this is an area where Lisp has unique advantages. Good Lisp systems tend to have good online documentation, good access to the underlying source code, and good interactive facilities for modifying a system.

Just as an illustration, I tend to think that Emacs couldn't really work as a closed source software package, and I don't think it would be nearly as powerful as it is with a different language than Emacs Lisp. (Eclipse is also open source, written in a very different language, and not nearly as easy to extend as Emacs.)

> Lisp doesn't really have [efficient persistent data structures]

It does now: http://www.ergy.com/FSet.html

On the syntax transformation side....

I have started to get really into what you can do with closures and coderefs in Perl. It isn't a macro in the lisp sense but it really works well and you can do some incredible things with it, and with closures and lexically scoped variables....

I suppose I should probably at some point blog about it.

I think a big part of the problem is that these are not tools you hand to a beginner. They are things which take a lot of time to really master.

It benefits both groups to enlighten G. Since G is so much larger, that's how we would increase the library count from 2727 to 27280.

The best way to demonstrate it is to write interesting software, not to flounder on some abstract explanation about the greatness of G's favorite language.

The original author of the blog post _did_ write some interesting software in Lisp, used it to write a book, and then he wrote his blog post.

To me this is a good example of something languages have to have to make progress: users and evangelists/educators, with a large overlap between the two.

Great post.

> It is accessible to those who are intently curious about theoretical aspects of software development, especially abstraction

Agreed. That's why I think it's disingenuous when people complain that Haskell folks are in an "ivory tower" or they think they're the elite. I just think they're more curious and that's why they learned it.

> And I don't like the attitude that because a group G doesn't understand it, and group H does, that H owes it to G to spoonfeed the information. That's not the case.

Exactly. I always think 'I should write this and that tutorial and then people will finally see why FP is so much easier and then perhaps...' but then immediately I think 'but I learned fine with the existing tutorials, why can't others?'.

The answer is, because most people don't care or aren't curious enough. The moment I learned of Lisp and Haskell and people gushing about it I knew I had to learn them just to see what the fuss is all about, and whether people are right in their praise. And they are right in their praise.

It takes time to learn but so does everything (for me, at least), but in the end you invariably get there, and it's more than worth it.

From what I've seen out of the Clojure community over the past few years, it seems like they're far more likely (and able) to offer up concrete examples of how Clojure makes their businesses and products successful in a way that an imperative language could not. So, yay Clojure community, and boo on hand-wavy Lisp people.




> ... in a way that an imperative language could not

Perhaps a nitpick, but sometimes nomenclature is important. Clojure (like all Lisps) is an imperative programming language. "Imperative" languages are contrasted with "declarative" languages[1]. Where does functional fit in and what is the name for "non-functional"? Well, that's not clear, as "functional" (in the PL sense) doesn't even have an agreed upon definition. Is Clojure "functional", then? Well, it's certainly not pure like Haskell is (and it has explicit loops etc.), but it does encourage more uses of function composition than, say, Java, it's more declarative, and it also isn't object-oriented, but that doesn't make it not imperative. Actually, I think even Haskell still qualifies as imperative.

But Clojure can certainly be contrasted with OO and procedural languages (even though at least one of its core abstractions -- protocols -- is borrowed from OO).

[1]: http://en.wikipedia.org/wiki/Imperative_programming

Taken to the extreme like you are doing, no language could ever be considered "functional" because in the end, they all need to perform I/O and ordered execution to be of any use. Instead, the commonly understood meaning of the word "functional" is that the language's design emphasizes and encourages use of immutable data, and Clojure fits that meaning.

>Instead, the commonly understood meaning of the word "functional" is that the language's design emphasizes and encourages use of immutable data and Clojure fits that meaning.

I would say a language which relies on functional composition as its primary abstraction is "functional." Immutable data is a side effect of that.

I would say that immutable data is a prerequisite for doing functional composition and languages that don't have immutable data-structures in their standard libraries are not used as FP languages.

"They all need to perform IO" - why does a programming language need to perform IO?

Haskell does not have a function that performs IO. For instance, `putStrLn "hello world"` returns an IO action, but it does not print anything to the screen.

In order to "perform IO" you have to name an IO action 'main'. Then the compiler understands that you want that action to take place in the Real World. But Haskell does not know about the Real World and so it cannot do IO.

This difference is important, otherwise there is a misunderstanding that Haskell is just "minimizing IO". It's not - there's no IO.

It is meaningless to nitpick about whether a language is declarative or not, since the term "declarative" lacks a good definition. Robert Harper, a programming languages researcher, lists 6 separate definitions and concludes, "It seems to me that none of them have a clear meaning or distinguish a well-defined class of languages." [1]

[1] http://existentialtype.wordpress.com/2013/07/18/what-if-anyt...

>Clojure (like all Lisps) is an imperative programming language. "Imperative" languages are contrasted with "declarative" languages.

Wikipedia, and probably everybody, defines truly functional programming languages (which includes Clojure and the other Lisps) to be declarative, not imperative. [1]

[1] http://en.wikipedia.org/wiki/Declarative_programming

> truly functional programming languages (which includes Clojure and the other Lisps)

This is wrong. I don't know how to say this... Lisps do usually support functional programming paradigms, but most Lisps are indeed anything but functional (in the ML sense).

Lisp was considered a functional programming language before ML existed. Lisps generally don't have as strong and exclusive support for the functional paradigm as some newer languages (and pretty much anything else still in use is newer than lisp), but then C++ is known as an OOPL despite having less strong and exclusive support for the OO paradigm than many older languages, so there's that.

Lisp is a multi-paradigm language. It can be functional, but it doesn't have to be. It can be OO, but it doesn't have to be. Classifying it as "being" any of these is asking for argument, unless you have very clearly defined (and agreed-upon) definitions of what it means to be a member of one of those categories. In particular, does membership mean that you require programming in that style, or only that you allow it?

> Lisp was considered a functional programming language before ML existed.

That was then and this is now. Today, functional programming means side-effect-free with a strong type system, and efficient use of higher-order abstractions over functions, including things like typeclasses to define entire morphisms given a few functions which establish relationships between objects. Lisp doesn't really support this effectively. Arguably, with concepts C++14 is more functional than Lisp.

> Today, functional programming means side-effect-free with a strong type system

Functional programming has always meant the style or paradigm of programming that prefers side-effect-free declarative code that can be modelled in the substitution model; type systems have always been (and remain) orthogonal concerns. A functional language is one that supports the functional paradigm, but it has never been most commonly used for languages that exclusively support that paradigm. Pure functional code is the term that has been used for code that is exclusively in that style, and pure functional language is the term that is generally used for a language that has exclusive support for that style.

I'm not sure I see what having "a strong type system" has to do with whether or not a language can be considered functional. It seems like it's just a way to say that Haskell and the MLs are the only languages that qualify. And that sounds more like PR than a formal definition.

It's also not really historically accurate. People were referring to languages based on the untyped lambda calculus as "functional" before things like Hindley-Milner appeared (in the 70s). Were they (and I'm talking about PL researchers) all speaking incorrectly?

That's a very precise definition of the class of functional languages. Unfortunately in the real world the definition tends to be much more varied. ;)

I don't think everyone uses that definition of declarative, though it is one of them. A very common different definition is that declarative programs specify "what", not "how", i.e. desired outcomes are explicit, while algorithms for computing them are implicit. From that perspective, Prolog, SQL, constraint solvers, answer-set programming, etc. are declarative paradigms (albeit sometimes with non-declarative escape hatches, like Prolog's "cut"). But languages in which you write algorithms, whether C or SML, are something other than declarative, and instead are oriented towards explicitly specifying computations rather than the desired results of computations.

It's worth noting that declarative and imperative styles (and this is true of things like the functional and OO paradigms as well), when applied as descriptors of languages, aren't binary categories. Lisp is a fairly early functional language (that is, one that, compared to contemporary alternatives, focused on the functional paradigm), and the functional paradigm is a declarative paradigm centered around the substitution model of computation.

OTOH, that doesn't mean it is impossible to have imperative code in Lisp.

Actually, I would say that protocols are more influenced by Haskell type classes than OO interfaces.

I think declarative should be contrasted with procedural, not imperative. Also, your reference/footnote [1] is missing.

> Actually, I think even Haskell still qualifies as imperative.

How so?

Technically, if a language can produce side effects, it's considered imperative. However, nobody really is going to say that Haskell is imperative.

Though there are some claiming that Haskell is the best imperative language because of having to state upfront what effects each bit of code can have. Since evaluation order is not very straightforward in Haskell, effectful code has to be explicitly ordered, which makes it very clear.

Haskell doesn't have side effects. It has effects. Side effects are effects that implicitly occur as a result of evaluation. That's why they're called side effects.

The language called Haskell cannot produce side-effects. You see the side-effects when you execute a program. That is, you cannot create side-effects within the language (except for escape hatches for FFI purposes).

This is different from other languages in that saying "what to print" is an actual command (hence, imperative) and not a value, as it is in Haskell (things that "will be commands when the program is run" are values in Haskell, as numbers and strings are).

> The language called Haskell cannot produce side-effects. You see the side-effects when you execute a program.

This is a rather meaningless distinction -- indeed, you can say the same thing about any other language on the planet. Programs written in C, Python, Ruby, ML, R, and Brainfuck only have side effects when you execute them. Obviously a C program just sitting on your hard drive unexecuted doesn't have any side effects.

No. It's a very meaningful distinction, actually.



Ruby:

will_print = puts "hello world" # it already prints to the screen! and will_print does not hold any meaningful value -- indeed it's nil

Haskell (demonstrated on the repl, which runs inside the IO monad):

let willPrint = putStrLn "hello world" -- nothing gets printed!

willPrint -- now the string gets printed (only because the repl is inside the IO monad which promises to execute action values)

Haskell (not in the repl):

willPrint = putStrLn "hello world" -- nothing gets printed!

willPrint -- this returns the same that 'putStrLn "hello world"' returns - an action value that, when run inside the IO monad, will execute its actions - so this still doesn't print anything, and we can pass willPrint around without side-effects, and compose values with willPrint

main = willPrint -- now the compiler will make good on its promise to execute any action value that is named 'main', and "hello world" will be printed to the screen.

The point of this is that you cannot print inside the Haskell language (nor do any other side-effect) - when you try to print you get this "action value" thing that you as a programmer cannot execute. Only the compiler can. So you can pass Ruby's equivalent of "puts 'hello world'" around, which you can't in Ruby.

So the distinction is quite clear.
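For readers outside Haskell, the "actions are values" idea can be roughly approximated in Python (this is only an analogy of the concept, not how Haskell or its runtime actually works; all names here are hypothetical):

```python
# Model an "IO action" as a plain value -- here, a zero-argument
# function -- that does nothing until some designated runner executes it.

def put_str_ln(s):
    """Return an action value; nothing is printed yet."""
    return lambda: print(s)

will_print = put_str_ln("hello world")   # no output happens here

# Action values can be passed around and composed freely...
def sequence(*actions):
    return lambda: [a() for a in actions]

main = sequence(will_print, will_print)

# ...and only the designated "runtime" (here, us calling main)
# actually performs the effects:
main()   # prints "hello world" twice
```

The point of the analogy: until something plays the role of the runtime, `will_print` is just data you can store, duplicate, and combine.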

> Programs written in C, Python, Ruby, ML, R, and Brainfuck only have side effects when you execute them.

They clearly don't. That's why when I write in Ruby:

puts "hello world"

Then the next line of code will be executed in a world where 'hello world' has already been printed to the screen.

Likewise, when in Ruby I write:

a = 5

a += 1

Then the next line of code will be executed in a world where a is 6.

So every line of code you write depends on the side-effects that came before it. Therefore, Ruby and most other languages have side-effects before code is executed. This is not the case in Haskell. Hence, no side-effects.

I know how IO works in Haskell, I don't need you to explain it to me. I still maintain the distinction you're making is meaningless -- Haskell code very clearly has side effects (mutating state, downloading files, sending data over a network, displaying pictures), and like any other programming language in existence, nothing actually happens until runtime.

Now you're just ignoring what I said. It's not about things actually happening at runtime, it's about having to consider the program as running in a changed environment after each line of code. I'm sure I've made my point to more mature people that never say things like "I don't need you to explain it to me".

That is also a sore point from the Lisp community in regard to Clojure.

Clojure devs are willing to compromise a bit to allow a good integration within the JVM and other implementations, thus allowing some shops to buy into Clojure.

Whereas many in the Common Lisp community will not, no matter what.

A long time ago, the inventor of the Clojure language Rich Hickey wrote a Java bridge for one Common Lisp implementation that was very good - I used it a lot for a while.

So, Rich has been thinking of the practicalities of Java interop for a long time.

I don't see CLers as having a problem with JVM interop per se. ABCL is often recommended as a perfectly respectable implementation for people who need Lisp on the JVM. But it's true that Clojure and ABCL take different approaches to it.

>Whereas many in the Common Lisp community will not, no matter what.

Common Lisp on the JVM: http://abcl.org/

Common Lisp/Java bridge: http://foil.sourceforge.net/

Foil's by Rich so that isn't the best argument :).

Usually the argument I see being tossed around is Clojure's deviation in using [] and {} besides ().

Also the direct use of Java's APIs instead of more Lispy ones everywhere.

I tried Clojure, and I quite like it as a language (especially the thread-first & thread-last macros [1, 2]).

I don't, however, like the ecosystem surrounding it. I find Leiningen (while an excellent tool) far too heavy for the majority of my purposes. I much prefer other functional languages (e.g. OCaml, Haskell) that don't need such an intricate project structure.

With OCaml & Haskell, I can start with one source file in one directory and gradually build up to a more sophisticated layout as needed, rather than have a complicated layout imposed upon me by the project.

[1] http://clojuredocs.org/clojure_core/clojure.core/-%3E [2] http://clojuredocs.org/clojure_core/clojure.core/-%3E%3E

You can get started with Clojure without Leiningen. You can get started with Scala without SBT. These are build tools for serious projects, with lots of options and lots of things they do for you. Some people recommend them to beginners and while that's not necessarily wise, it goes to show that they are mature and OK enough that some beginners cope with it.

Yes, you can get started easily with Haskell - but then Cabal is such a complete piece of shit that people felt the need for a Haskell distribution with batteries included. And it is my feeling that you're mixing the cause and the effect here.

I agree. If Cabal were as good as Leiningen, Haskell would have been way more popular than it is today.

Avoid success at all cost. :)

Leiningen prevented Clojure from having the dependency-hell problem from the beginning, a problem Haskell was prone to but eventually fixed with sandboxing. Then would you say sandboxing is too heavy for Haskell?

Now if by heavy you mean the folder structure lein generates, then I disagree. Those folders actually keep the task of understanding any lein project easy.

If by heavy you mean the JVM: you can compile the thing into a jar that runs on any JVM, or into a js file using ClojureScript. The JVM, web browsers, and Node.js are very likable cross-platform layers, and I don't see how OS-dependent binary executables compare.

I sure hope the giant, hideous, obtrusive diamonds inserted into the text to denote a hyperlink don't catch on as a trend. It's a great way to break the flow of the text and irritate your readers.

As for the idea of Lisps, well, it sure seems neat. But I've literally never run across a situation where I needed my code to edit itself. I've never run across a situation where the lack of an everything-is-an-expression-is-a-list feature prevented me from doing what I wanted to do.

So I just don't really feel the need to get repetitive strain injuries in my pinky from reaching for the parentheses all the time.

> As for the idea of Lisps, well, it sure seems neat. But I've literally never run across a situation where I needed my code to edit itself. I've never run across a situation where the lack of an everything-is-an-expression-is-a-list feature prevented me from doing what I wanted to do.

This is classic Blub paradox. You don't feel like you need a feature until you start using it, at which point you start wondering how anyone can live without it. Tools you have available limit the thoughts you can have. That's why it's always good to look for better and more powerful tools :).

OK, but I've been aware of Lisp macros for a while, and still never seen a situation where I needed them.

A couple weeks ago, though, I found myself in a situation where the user needed to be able to specify a filter, and the filter was going to be a tree of expressions, and the program had to take what the user specified and turn it into something the program could execute... and the filter tree looks a lot like an S-expression... hmm...

So I'm seeing something that could be done much easier in Lisp. It's still not a need for a macro, though (unless you're going to suggest that I use a macro to turn some user-writeable DSL into Lisp, and sure, it could be done that way.)
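The scenario described above can be sketched in a few lines (a hypothetical illustration in Python; the operator set, field names, and function name are made up):

```python
# A user-supplied filter, represented as an S-expression-like nested
# list, interpreted by an ordinary recursive function.

def eval_filter(expr, record):
    op, *args = expr
    if op == "and":
        return all(eval_filter(a, record) for a in args)
    if op == "or":
        return any(eval_filter(a, record) for a in args)
    if op == "not":
        return not eval_filter(args[0], record)
    if op == "=":
        field, value = args
        return record.get(field) == value
    raise ValueError(f"unknown operator: {op}")

# Roughly: (and (= status "open") (not (= owner "bob")))
tree = ["and", ["=", "status", "open"], ["not", ["=", "owner", "bob"]]]
print(eval_filter(tree, {"status": "open", "owner": "alice"}))  # True
print(eval_filter(tree, {"status": "open", "owner": "bob"}))    # False
```

In a Lisp the filter tree and the code share one notation, which is what makes the scenario feel so natural there; a macro would only be needed to do the translation at compile time rather than at runtime.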

I'm not a lisper either, but I think the key idea, and I've looked but can't find the reference for this, is expanding the language space to intersect the problem space. Any given language has a domain of problems it's suited to: manipulating numbers, strings, and objects with loops and conditionals, and organizing code with data (OOP). This is the language domain. Then users (programmers) create abstractions with the language to represent a problem domain (payment processing, CMS, reddit, whatever).

The idea with macros is to extend the syntax of the language to move the entire language to the problem domain, for example the HTML-generating DSL pg and company used at Viaweb (see his writing).

This distinction is not as profound today, IMO, because languages have come a long way. PHP didn't exist when pg and friends were doing Viaweb; today templating HTML is old hat, but at the time specifying a DSL in Lisp macros to dynamically generate HTML was quite innovative.
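The markup-as-nested-data idea can be sketched without macros at all (a hypothetical Python illustration; in a Lisp, macros would additionally let the translation happen at expansion time instead of at runtime):

```python
# Represent HTML as nested data (much like an S-expression) and
# generate markup from it with a plain recursive function.

def html(node):
    if isinstance(node, str):
        return node                      # leaf: literal text
    tag, *children = node                # branch: (tag child...)
    inner = "".join(html(c) for c in children)
    return f"<{tag}>{inner}</{tag}>"

page = ["ul", ["li", "first"], ["li", "second"]]
print(html(page))  # <ul><li>first</li><li>second</li></ul>
```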

Everyone here is hoping their favorite language feature will be added to Java 9, Python 3, C++14, Javascript, etc. Whatever feature that is, Lisp programmers would add it with a macro and move on. There is no central governing body deciding what I can or can't do with the language syntax. That's why macros are useful.

As someone who writes Lisp (well, Clojure) every day and does not particularly enjoy it, I find the common complaint about parentheses to be a non-issue.

In fact, I'm not sure I've heard of anyone who wrote any significant amount of lisp code and came away talking about parentheses. This seems to be mostly a reaction from people who've read a bit of Lisp without using it.

Then again, I just noticed that I type ( and ) with my third and fourth fingers, so maybe it would be worse if I typed properly :)

I would assume it'd be the same as people complaining about significant whitespace in python. No one who actually programs in python complains about the whitespace.

I developed in Python for several years and I disliked whitespace, because sometimes I want the flexibility to format code in a way that makes more sense given the right context and the whitespace was always in my way.

Plus, its whitespace-based syntax was used as an argument not to evolve the language, being one reason why they haven't added proper anonymous functions.

I can't comment on LISP's parens much, for now it doesn't bother me, but Python's whitespace did and I tried liking it for about 3 years.

Python has proper lexical closures in the form of inner functions. Can someone please enlighten me why one would still insist on multi-line anonymous functions? For documentation purposes it's a) better to give something a name, and b) better to have a multi-line function in a separate place instead of inline in the form of a lambda.

Concerning whitespace, serious (large) projects have a very specific style guide, which includes prescriptions on whitespace. Python doesn't add any extra restrictions to that and in other languages a mismatch between whitespace and braces is a frequent source of bugs, so significant whitespace avoids that as well.
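The inner-function workaround suggested above can be sketched as follows (a minimal illustration; all names are hypothetical):

```python
# Python's inner functions are full lexical closures, so a named,
# multi-line function can play the role a multi-line lambda would.

def make_counter(start=0):
    count = start
    def step(delta=1):       # a named, multi-line "lambda"
        nonlocal count       # closes over (and mutates) count
        count += delta
        return count
    return step

tick = make_counter()
tick(); tick()
print(tick(3))  # 5
```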

Only by not working with anonymous functions can anybody come up with such an impression.

Python has at least 3 features that are not needed in languages that have proper support for anonymous functions and that are more expression oriented:

1. for comprehensions

2. the with statement

3. decorators

You cannot work efficiently with higher-order functions until you have anonymous multi-line functions, period. Also, Python's single-line lambdas would be a lot more useful if Python weren't so statement-oriented; unfortunately, in practice they are useless.

In regards to your points:

a) no, I don't buy that

b) ordering matters, we read code as we are reading text, top-down, left to right
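The statement-vs-expression friction being argued about here can be made concrete (a small illustrative Python sketch; the helper name is made up):

```python
# Python's lambda bodies are limited to a single expression, so any
# multi-step callback must be hoisted out and given a name.

# Fine: the body is one expression.
doubled = list(map(lambda x: x * 2, [1, 2, 3]))

# Not expressible as a lambda: the body needs multiple statements.
def clean(s):
    s = s.strip()
    s = s.lower()
    return s or "<empty>"

names = list(map(clean, ["  Alice ", "BOB", "  "]))
print(names)  # ['alice', 'bob', '<empty>']
```

Whether hoisting `clean` out is a documentation win or an ordering loss is exactly the disagreement in the comments above.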

> Python has at least 3 features that are not needed in languages that have proper support for anonymous functions and that are more expression oriented:

> 1. for comprehensions

Off the top of my head Scala, Erlang, and Haskell -- all of which are "more expression oriented" than Python (and all of which have robust support for anonymous functions including multiline anonymous functions -- the latter despite, like Python, having indentation-sensitive syntax), all have comprehension syntaxes like Python's for comprehensions.

So, I'm not entirely buying the idea either that "being more expression oriented" or "having proper support for anonymous functions" eliminates the utility of comprehension syntax.

Well, Scala and Haskell's comprehensions are monad comprehensions, mapping to operations such as map, filter, and flatMap/bind. Python's for comprehensions only work on things that are iterable, which IMHO is a severe design limitation and makes them less useful than they should be. Think of async I/O abstractions, like futures / observables / iteratees, which are not iterables.

And yes, if Python makes it easier to work with higher-order functions and such combinators / operators become the norm, then we'll talk about 2 non-orthogonal and conflicting features.

Python is the only language I know that added for comprehensions before proper support for anonymous functions. All the other languages I worked with (including Clojure, to be on topic) had anonymous functions before the syntactic sugar built on top. Clearly Python has a problem here.
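The iterable-only limitation claimed above can be demonstrated directly (a small sketch using the standard-library concurrent.futures module):

```python
# Python comprehensions desugar to iteration, so they only consume
# iterables. A Future holds a value but is not iterable, so it cannot
# be mapped over with a comprehension.
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor() as pool:
    fut = pool.submit(lambda: 21)
    try:
        [x * 2 for x in fut]              # fails: Future is not iterable
    except TypeError as e:
        print("comprehension failed:", e)
    # Instead you must go through the Future's own API:
    print(fut.result() * 2)  # 42
```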

> Python is the only language I know that added for comprehensions before proper support for anonymous functions. All the other languages I worked with (including Clojure, to be on topic) had anonymous functions before the syntactic sugar built on top. Clearly Python has a problem here.

Wait, first you claimed that Python wouldn't need comprehensions if it had better anon function support, and now you've claimed that Python has a problem because it added for comprehensions before anon function support, even though languages with anon function support still find the need for for comprehensions.

You seem to be convinced that Python is wrong, but not really committed to any consistency in the reason that Python is wrong.

The 2 statements are consistent.

You can live without for comprehensions if you have good anonymous functions support. If Python adds better anonymous functions support, its for comprehensions will be conflicting and much less useful than in languages that had good anonymous functions support from the beginning.

Either way, Python will never get multi-line anonymous functions support, since it's first of all considered to be un-pythonic. So we're having this discussion for nothing - I fell out of love with Python some time ago; if you still like it, then good for you.

> You can live without for comprehensions if you have good anonymous functions support.

You can live without either, as decades of C programmers have demonstrated. Both are beneficial, as their widespread popularity in newer languages attests. Neither is a perfect substitute for the other, as the fact that they tend to both be present in many newer languages, rather than one excluding the other, attests.

> its for comprehensions will be conflicting and much less useful than in languages that had good anonymous functions support from the beginning.

I don't see the "conflict" asserted here. I've used Ruby fairly heavily -- which has, in the relevant sense, far better anonymous function support than Python (even though it has different quirks) -- and certainly as nice as Ruby blocks are, a clean comprehension syntax is pretty much the main thing I find myself wishing I had sometimes in Ruby that Python has.

And, sure, Python's comprehensions may be less general than, e.g., Scala's, but switching them to be monadic rather than iterator based doesn't require better anonymous function support, it just requires changing which protocol they depend on.

Can you please explain to me what's important about these functions being anonymous? Why, specifically, they shouldn't be given a name?

How do you define working "efficiently with higher-order functions"? Given that Python fully supports higher-order functions, I am really curious what you could mean. I didn't downvote you, but it may have to do with your pointed assertion here, without anything in the way of an argument.

As to "ordering matters": sure, but since the functions a nontrivial program calls inevitably form a graph, they must necessarily be defined in some arbitrary linear order anyway.

    some_collection.where(x -> x.name == 'foo').sort_by(x -> x.age)
These functions are tiny and trivial. They don't need names, and if you were to give them names, the extra weight becomes burdensome. Not just in syntax duplication, but the redundancy of the name as a comment on the trivial function body.

    nameIsFoo = x -> x.name == 'foo'
    getAge = x -> x.age
Note that giving the functions names has also changed the source order of the function bodies. Now you need to do a mental cross-reference to follow, instead of being able to read the definitions inline.
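The same point in runnable Python (`people` is a hypothetical collection standing in for `some_collection`; lambdas play the role of the inline anonymous functions):

```python
# 'people' is a hypothetical collection standing in for some_collection.
people = [{"name": "foo", "age": 42}, {"name": "bar", "age": 7}]

# Inline: the trivial predicates read in the order they are applied.
inline = sorted((p for p in people if p["name"] == "foo"),
                key=lambda p: p["age"])

# Named: same behaviour, but the function bodies move away from the call site.
def name_is_foo(p):
    return p["name"] == "foo"

def get_age(p):
    return p["age"]

named = sorted(filter(name_is_foo, people), key=get_age)
assert inline == named
```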

Sure, but those are one-line functions, which Python supports through lambdas. (Incidentally, some of them could be done with the operator module, without defining a new function). I would argue that once you write a multi-line function, it makes sense to give it a name and define it separately, which makes the "no multi-line anonymous functions" complaint less pressing.

Let me give you an example from my experience.

I wrote an elegant parser definition library, used for parsing specialized output of ... never mind.

It had hierarchic specifications (regular expressions etc) of parsing states, along with anonymous functions which stored away parsed data and sent the parser among different states.

New juniors could after a few hours use this to quickly parse complex text documents.

I cursed a lot while trying to port this to Python. :-( A dozen named sub-ten-line functions, specified by name and referenced inside a parser structure? It just didn't work.

Edit: The point is, there are use cases where real lambdas are useful (apart from map etc.). It is just weird to argue otherwise.

I'm a python guy and I totally agree that multiline anonymous functions greatly enhance the readability of code in certain applications. You can do everything with named functions, but it's not always the best method for conveying meaning.

I can live without multiline anonymous functions - but I'd make use of them if they did exist.

Imagine having to build named functions for each while, if/else and foreach statement. Because that's what it feels like when working with async I/O in Python: a complete pain in the ass compared to other languages.

Scala sample:

      cache.get[String]("name").flatMap {
        case Some(value) => value
        case None =>
          database.query("names").head.flatMap {
            case Some(id, value) =>
              cache.set("name", value, 10.minutes)
                .map(_ => value)

            case None =>
              ... // fallback elided in the original comment
          }
      }
BTW, in this sample, for comprehensions are not that useful. But if you're using Scala-Async, you can write that in a style resembling blocking I/O:

      async {
        val cached = await(cache.get[String]("name"))

        cached match {
          case Some(value) => value
          case None =>
            val fromDB = await(database.query("names").head)
            fromDB match {
              case Some(id, value) =>
                await(cache.set("name", value, 10.minutes))

              case None =>
                ... // fallback elided in the original comment
            }
        }
      }
In both cases multi-line anonymous functions are leveraged.
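For contrast, a rough Python sketch of the named-callbacks style the parent describes (`cache` and `database` are hypothetical in-memory stand-ins, and synchronous continuation-passing stands in for real async I/O):

```python
# Hypothetical in-memory stand-ins for the cache and the database.
cache = {}
database = {"names": ("id-1", "Alice")}

# Callback style forces a named function per step:
def get_name(done):
    value = cache.get("name")
    if value is not None:
        done(value)
    else:
        query_database(done)

def query_database(done):
    row = database.get("names")
    if row is not None:
        _id, value = row
        cache["name"] = value  # populate the cache, then continue
        done(value)
    else:
        done(None)

results = []
get_name(results.append)
assert results == ["Alice"]
```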

> Can some please enlighten me why one would still insist on multi-line anonymous functions?

Reduced visual clutter and better flow in reading code.

> For documentation purposes it's a) better to give something a name, b) have a multi-line function in a separate place instead of inline in the form of a lambda.

I disagree: if the only place it's ever used is in a particular call to a higher-order function, it reduces the difficulty of reading the code -- if it's not a large multiline function -- for it to be directly in the call as a lambda. Having it named is useful (1) if it needs to be referenced more than once (DRY), or (2) if it is large enough that it breaks up the flow too much for it to be included in one place (which is a somewhat subjective cut-off, but for me 1 line is far below it.)

EDIT: That being said, I'm mostly fine with Python the way it is -- while I sometimes wish a way to fit multi-line lambdas without disrupting the rest of the language could be found, I'm not sure I can see a good way for it to work, and it's not really essential.

I find it a pain, and error-prone, having to change indentation in Python code - e.g. when adding an 'if' in front of a block of code. It's so easy to miss a line or mess up the indentation in the block itself, and then you might not spot an error until run time.

It's not a pain if you use an editor which lets you indent a whole block of code at once; e.g., in vim, use shift-V to select lines of code, then >> or << to in- or dedent. This is handy for all programming languages, of course.

Having groups of lines with the same indentation act as text objects helps too.


I was under the impression that Python emits an error about inconsistent indentation at compile time (the initial parsing and interpretation of a script file), not runtime (+x time units later, after the program has started). Is that incorrect?

I'm talking about something like this..

    def foo():
        ...
        and_this()
later I change it..

    def foo():
        if something():
            ...
        and_this()

oops... and_this() should have been in the if block and I won't find out till I run it.

(imagine a much bigger more complex example of the above function)

In large pieces of code this can be easy to do. If you are forced to use parenthesis it's much more difficult to make this error. One could argue that experience prevents you from doing this, but I have sadly found this not to be the case in practice.

> If you are forced to use parenthesis it's much more difficult to make this error.

By "parentheses" you mean delimiters, which create visible boundaries to defined areas in code. Parentheses are an example of delimiters, but not all delimiters are parentheses.

Bash has if ... then ... else ... fi

Ruby has if ... else ... end

C/C++/Java have if (logical test) { controlled area }, nested to any practical depth.

And so forth. Python doesn't.

> One could argue that experience prevents you from doing this but I have sadly found this not to be the case in practice.

This is an argument against complex functions that do a lot, as opposed to breaking program logic up into smaller blocks that are easier to understand and control. The old argument against this practice was that a large function that did everything was faster than the same logic broken into smaller blocks. A modern compiler will generally prevent this from happening.

Well, I used the word parenthesis since the topic was Lisp.

> This is an argument against complex functions that do a lot

Functions large and complex enough to make this problem significant seem to be the reality I have to deal with when programming in the large. It's only my opinion but a language feature that improves my real world experience at no cost is a bonus.

I fail to see your point. You could accidentally put a brace in the wrong spot, or use an if statement without braces in a C-like language, and it would stand out visually much less than the code you cite above.

After you write a fair amount of Python code (<5000 lines I guess), that idiom will strike you immediately.

Such errors are caught at compile time. I think the prior poster was speaking in a general sense in which runtime means any time after source file editing.

> No one who actually programs in python complains about the whitespace.

I wouldn't say "no one." I read comments regularly from people, usually students, who get into trouble with whitespace in Python, especially if they mix tabs and spaces in the same source file.

If you're mixing tabs and spaces in the same source file, chances are that you're not a very seasoned python programmer. I would imagine a large segment of students also complain about parens in lisp.

Though to be fair, there would be some people that use python day-to-day that don't like significant whitespace. I'd bet there'd be at least a few lispers that don't enjoy wrangling parens.

I program in Python for a living and I curse significant whitespace every single work day. Especially when I try to copy paste some code into the shell. It's one of the worst misfeatures of the language, and it wouldn't even be necessary if there was an "end" keyword.

Use IPython and %cpaste magic. Works well.

I use IPython. %cpaste doesn't work all the time if you're copying from the middle of a function. And even if I just paste a one-liner that starts with a few spaces, why the hell does it even bother to complain about that? Oh wow, this one line of code (which happens to be the entirety of code I'm asking you to execute) is indented wrong, tell me something I don't know! What a PITA.

I program in python (and C/C++) and really dislike python's whitespace handling. I think giving semantic meaning to one of the least-standardized aspects of text (tabs/spaces) is a bad decision through and through.

That's the thing. Python does standardise it. 4 spaces, no tabs.

Not quite...


Spaces are the preferred indentation method.

Tabs should be used solely to remain consistent with code that is already indented with tabs.

Python 3 disallows mixing the use of tabs and spaces for indentation.

Python 2 code indented with a mixture of tabs and spaces should be converted to using spaces exclusively.

When invoking the Python 2 command line interpreter with the -t option, it issues warnings about code that illegally mixes tabs and spaces. When using -tt these warnings become errors. These options are highly recommended!

PEP 8 was published in 2001, a good 13 years ago. That's about as standard as you can get.

It's a zombie complaint.
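The Python 3 rule quoted above can be checked directly; a minimal sketch using `compile` on a source string that mixes a tab and spaces:

```python
# Python 3 rejects code whose indentation mixes tabs and spaces ambiguously.
source = "def f():\n\tx = 1\n        return x\n"
try:
    compile(source, "<example>", "exec")
    raised = False
except TabError:
    raised = True
assert raised
```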

I feel it's actually easier with parentheses, because with always-well-formed Lisp you get the benefit of incredible tools like paredit-mode, which make it easier to type than in other languages. And I say this as someone who does full-time C++ with a fair bit of Python, JavaScript and HTML, and only does Lisp stuff as a hobby.

I was looking for a way to summarize Lisp syntax benefits. Most BNF-heavy languages are appealing as static text. They have layout and differentiators that make them nice to look at.

Lisp is a building material: the expressions are objects, the syntax itself is an object. You don't want to look at it, you want to model with it, live. And with the "metacircular" (sorry for the sophisticated-looking lingo) mindset, you can program how you interact with it (proto-refactoring a la paredit/redshank etc).

Most people can't judge s-exps properly because they're not playing with them, only looking at dead printouts; and the few who do are using a text buffer[1], avoiding one important programming rule: automate everything. Lisp syntax has a simple and potent programmable API giving you a lot of power for free.

[1] I even watched a lisper using emacs without paredit at a meetup and it was painful.

In fact, I'm not sure I've heard of anyone who wrote any significant amount of lisp code and came away talking about parentheses.

Whelp, now you have. Hi! :)

Although I sometimes miss the goodies homoiconicity brings, I much prefer the increased syntactic structure of other languages.

If you write Lisp every day why not remap the ( and ) to a non-shifted position? I use the [ and ] keys but I use Common Lisp, not Clojure.

FWIW In DrRacket the key ] inserts either ], }, or ) to close whatever is open.

I find it odd that people use their pinky to type parens. When my fingers are on the home row, '(' is above my middle finger, and ')' is above my ring finger. Typing them with my pinky would require me to contort my hand.

> When my fingers are on the home row, '(' is above my middle finger, and ')' is above my ring finger.

If your fingers are on the standard home keys (index fingers on "F" and "J"), '(' is between the middle and ring finger and on the normal (inward) path of the ring finger as it extends, ')' is between the ring finger and pinky and on the normal path of the pinky as it extends.

OTOH, if you keep your right index finger on the "K", which is less standard, then those become naturally on the middle and ring (and it may make sense for programming, but less for general typing -- the standard home position is based on where the letter keys are, but the extra symbols on the right are more used in programming than general typing, so moving the right hand one key out makes some sense.)

It's not odd... it's traditional touch-typing. It should be your ring-finger for ( and your pinky for ).

Many people seem to type it the way you have just suggested (middle/ring) - myself included. I'm currently re-training myself to do that properly.. it feels weird at first, but I'm starting to get the feeling it will actually be faster.

Your touch typing can always be better, right?

For me it is the prefix notation, and I still find nested expressions difficult to read.

Why don't you enjoy it?

If you've never had the possibility to do so before it's not surprising that you never felt the need to do it. Languages affect your way of thinking and implementing algorithms.

On the other hand, once you've tried the sweet honey of lisp macros, going back to C macros makes you sad. There's not a week that goes by without me being frustrated at the rudeness of C macros. Not to mention languages that don't offer a macrosystem at all. And the parens thing is a tired meme.

I agree with you for the diamonds though, very distracting.

The first macro system I learned was Clojure's. Only once have I used a C style macro, and that was in my assembly class.

I noticed the article mentions that some don't like calling syntax extensions macros because of the comparison to Common Lisp macros. To that, I say that's a crazy way of thinking. Macros are necessarily dependent upon the semantics of the language they are for, so anybody who thinks that all macro systems are identical is probably someone who wouldn't go out of their way to learn a language with a non-textual substitution macro system, so changing the name for that purpose seems foolish. On the other hand, syntax-extension or syntax-transformation is a more descriptive term on its own merits.

And yeah, the diamonds are silly. Better to just color the word.

"But I've literally never run across a situation where I needed my code to edit itself."

Well, "need" is a strong word. I mean, you could probably argue, in a philosophical sense, you never needed to write any computer programs in the first place (plenty of people go through life without doing so).

However, it is very likely there were times in your life where having your code edit itself would have led to getting a program working in less time, with fewer bugs, or better performance. Which of these benefits apply, of course, depend on the problem you were trying to solve.

"So I just don't really feel the need to get repetitive strain injuries in my pinky from reaching for the parentheses all the time."

Um, no. Rich Hickey has pointed out that idiomatic Clojure requires fewer parens than the equivalent Java.

I mainly develop software in C# and Python at work, and I sure do miss some of the convenience that lisp-inspired languages offer. I'm responsible for the overall architecture of quite some large-ish enterprise apps. There's a reason why many Java and C# business applications have tons of MySuperAbstractFactoryProvider classes and use reflection-based look-up of metadata attributes obsessively. There's just no other way to provide a flexible framework that meets the changing and often arcane demands of our customers.

About macros, in the case of C#, what, if not a kind of macro, is the 'using' statement that works with IDisposables? If it's useful here, why not in other places?
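For comparison (a sketch, not from the original comment): Python's context-manager protocol plays the same role as C#'s using, and contextlib lets a library author define such a construct as ordinary code. `disposing` and `Resource` are hypothetical names.

```python
from contextlib import contextmanager

# 'disposing' and 'Resource' are hypothetical names; the point is that the
# construct is ordinary library code, not special-cased language syntax.
@contextmanager
def disposing(resource):
    try:
        yield resource
    finally:
        resource.close()  # always released, like C#'s using/IDisposable

class Resource:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

r = Resource()
with disposing(r) as res:
    assert not res.closed  # still open inside the block
assert r.closed  # closed on exit
```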

giant, hideous, obtrusive diamonds inserted into the text to denote a hyperlink

I hadn't even noticed that they're hyperlinks; I thought something had gone wrong with the text formatting or character set!

I've literally never run across a situation where I needed my code to edit itself.

Needed to is too strong a statement. I've never needed to write a macro (and in Clojure, it's somewhat frowned upon to write macros when normal functions will do), but sometimes it saves you from a lot of working around limitations (and every language has limitations). It also allows things to be added to the language as libraries that otherwise would have to be built in - the majority of programmers won't need to do this, but if you do, it's awesome knowing that it's possible. Clojure's core.async is an excellent example (Go-style goroutines, as a library).

I've never run across a situation where the lack of an everything-is-an-expression-is-a-list feature prevented me from doing what I wanted to do.

Of course not, but you can also do everything you want in assembly. What everything-is-an-expression-is-a-list gives you is 1) uniformity - everything works the same way, so it lowers the cognitive load; 2) simpler code - if you need a statement in an expression, you can do so, and other languages don't prevent you from getting the same result - they just take more code, or the code is more complex, or you use less-than-ideal constructs, or...
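A small Python illustration of the statement-vs-expression point:

```python
n = 10

# Expression-oriented: the conditional *is* a value.
label = "even" if n % 2 == 0 else "odd"

# Statement-oriented: the same result needs an assignment in each branch.
if n % 2 == 0:
    label2 = "even"
else:
    label2 = "odd"

assert label == label2 == "even"
```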

So you never need these things, but they make life more pleasant to have them.

I just don't really feel the need to get repetitive strain injuries in my pinky from reaching for the parentheses all the time.

I program in Clojure fulltime and I don't find that it has any more parentheses than a language like C++ or Java does. On top of that, anyone who programs in a Lisp for a while will use some form of paredit, which makes working with parentheses a breeze to the point where I find I type less than I do in other languages because jumping between parentheses, splicing parentheses-enclosed lists, wrapping things in parentheses and such tasks are a single keypress that just isn't possible in other less-parentheses-focused languages.

In short, just like significant whitespace in Python, parentheses are a non-issue in practice (after a short adjustment period).

But at the end of the day, to each his own. If Lisp doesn't do it for you, then that's fair enough - you don't have to use it :-)

> I program in Clojure fulltime and I don't find that it has any more parentheses than a language like C++ or Java does.

People see the parentheses and go crazy, but in fact it is just a matter of moving the opening parentheses.

I now use this approach when explaining Lispy languages to others.

Yeah, I remember once a friend complaining about all of the parentheses. I asked him to literally scroll to the bottom of the Java file he was in, where there were about as many closing braces as I have ever seen closing parens.

I realize this is not necessarily the norm. But it was hilarious in context.

Good call on the diamonds.

Echoing everyone else, basically, your other points are a bit shortsighted.

You never need code that edits itself - completely true. Completely missing the point. Most programming features fall into this category; you don't -need- them, you could always use assembly instead. The point is that once you get them into your brain as an option, problems that might otherwise be tricky or time consuming become much easier.

To that end, everything is an expression is a list...yeah, you don't need it either. But, assuming the above (that code that can edit itself turns out useful sometimes), imagine how easy it is to metaprogram when all your executable code is just a list, and you already know how to modify lists.

In either case, you don't -need- the feature, sure, but a moment's reflection might open up the possibility that once you fully grok the ramifications, and have it amongst your other programming tools, you'll find a good use for it.

As to too many parentheses, as someone else mentioned,

  foo(a, b, c)
simply becomes

  (foo a b c)
Not really any more of them.


    c = sqrt(a*a + b*b)

    (setf c (sqrt (+ (* a a) (* b b))))

Amusing... but to be fair, this is also an operator precedence issue where C is letting you drop several pairs.

>where C

...I think operator precedence allows the removal of the parentheses for at least: Ada, C, C++, D, Dylan, Erlang, Fortran, Go, Haskell, Icon, Java, Javascript, Julia, Lua, Mathematica, OCaml, Pascal, Perl, Prolog, Python, QBasic, R, Ruby, Scala, and TCL.

I don't see how that is supposed to be a valid excuse. Non-lisps tend to support operator precedence, reducing the number of parens significantly. Thus, the argument "other languages require the same number of parens" is completely false.

For a book typesetting program that's trying to produce HTML without a lot of writing overhead, some of those features are pretty great. Steve Yegge has a related blog post (the context is Emacs): https://sites.google.com/site/steveyegge2/the-emacs-problem

Being able to write new control structures can come in very handy. Imagine if your language of choice had exceptions/errors, but didn't have try/catch/finally. Now imagine you could just implement that structure as a command/macro. That's a rather extreme example, but it does highlight the power available.
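Python has no macros, so this can only be approximated, but a higher-order function shows the shape of "control structure as library code" (`attempt` is a hypothetical name; a real macro could also give the construct its own syntax, which a function cannot):

```python
# 'attempt' is a hypothetical name; real macros could also give the
# construct its own syntax, which a function cannot.
def attempt(body, handler):
    try:
        return body()
    except Exception as exc:
        return handler(exc)

result = attempt(lambda: 1 // 0, lambda exc: "recovered")
assert result == "recovered"
```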

Yeah, middle clicking on the page to start a scroll got weird as well.

In defense of why-homoiconicity-is-cool: it's useful to build pipelines of things in code.

Imagine if you had a series of functions that manipulate an audio signal. You could create a pipeline of these on the fly just by putting the functions in a list, and then reflecting the data against it.

You can imagine exposing this to a user - they check some boxes saying what features they want, and then the pipeline gets assembled for them.

This would be do-able but laborious in Java.
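A minimal Python sketch of the pipeline idea (`normalize` and `invert` are hypothetical stages; note this relies on first-class functions rather than homoiconicity per se):

```python
from functools import reduce

# Hypothetical signal-processing stages, each an ordinary function:
def normalize(samples):
    peak = max(abs(s) for s in samples) or 1
    return [s / peak for s in samples]

def invert(samples):
    return [-s for s in samples]

# The pipeline is just a list; a UI checkbox would amount to appending
# a function to it.
pipeline = [normalize, invert]

def run(pipeline, data):
    return reduce(lambda acc, f: f(acc), pipeline, data)

assert run(pipeline, [0.0, 2.0, -4.0]) == [0.0, -0.5, 1.0]
```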

Javascript and Io are homoiconic languages without the parens emphasis.

> Imagine if you had a series of functions that manipulate an audio signal. You could create a pipeline of these on the fly just by putting the functions in a list, and then reflecting the data against it.

That's not an example of homoiconicity.

Thanks, and apologies to the readers.

I didn't mind but note that he wrote http://practicaltypography.com.

As for not missing stuff from Lisps, I think opportunities start appearing once you start using it more. But I agree, it doesn't prevent you from getting things done.. I don't see any Lisp in the Go space (native compilation, great networking features), so I find Go more pragmatic at this time.

I think Gambit Scheme has both of those features. Granted, it doesn't have the community or momentum of Go, but it exists.

Add Chicken Scheme (http://www.call-cc.org/) to the list.

> I don't see any Lisp in the Go space (native compilation, great networking features)


Proper Lisps compile to native code, both JIT and AOT.

Plus there are quite a few commercial compilers with networking libraries.


Don't judge Lisps by the open source alternatives.

Sorry, instead of "native compilation" I actually should have said "compiles to a single static binary".

Which Lisp compilers also do.

If you took a Common Lisp programmer from the early to mid 90s in a time machine to today, very little about current programming languages would seem novel or an advance over what he or she was using then.

I think this is a reason for much of the smugness of Lisp programmers. Whatever features you think are new or cool or advanced about your programming language, Lisp probably got there first.

Nonsense. We've figured out how to do type systems. We can even fully infer types if you're willing to accept some quite reasonable restrictions on how polymorphic your code is. We have a bunch of reasonable approaches to effect tracking, which Lisp folk used to have to do by hand (that story about the T garbage collector sounds like the most unmaintainable piece of code I've ever heard of). We know how to solve the expression problem.

Modern "pragmatic"-oriented languages have little that wasn't in Lisp - or in Algol. Which is fine! The innovation is about the wider ecosystem (a lisp programmer from the '90s would be amazed by CPAN, never mind maven), or about making common tasks easy (a lisp programmer might eventually put together a collection of idiosyncratic macros with much of the functionality of Rails, but that wouldn't be a project you could hire someone else to maintain). OTOH modern "researchey" languages, like Coq or even Haskell, are far far ahead of Lisp.

> OTOH modern "researchey" languages, like Coq or even Haskell, are far far ahead of Lisp.

I know Common Lisp well, and I honestly tried to learn and to use Haskell. I realized that Haskell has the advantage of a strong type system, but it seems only to be useful for language research (compiler writing) and mathematical applications. Haskell is (in my case) almost useless for everyday real-world applications. It is a very good choice if I know the algorithm and data structures exactly in advance. But real world applications often require changes which are not predictable. It is a pain to align a whole Haskell project according to new requirements to make the whole system work again. It takes far too much time, and I wonder where the benefits are if other, more weakly typed languages can do the same job as well, or even better.

In the Lisp world I can develop prototypes very quickly. And if I need a strong type system as in Haskell then I can use Shen (shenlanguage.org) which has a strong rule based type system on top of Lisp. That means that I can use both worlds -- the default dynamic system of Lisp, and the strong type system of Shen.

Lisp is still very powerful and yet small in comparison to the huge LLVM system of Haskell. It can be used on embedded systems (ECL for instance) which Haskell is not able to do (AFAIK).

I don't know what exactly you mean by "tracking". If that means tracing, then Lisp even offers backtracking (debugging back in time), and I think almost every Lisp programmer knows Slime.

So the claim that Haskell is "far far ahead" of Lisp is not true at all. The only real weak point of Lisp is the lack of a good GUI library. I wish there were strong support for Qt and HTML5.

Another favorite language of mine is Nimrod, because it joins the best of several languages (Python, Perl, Lisp and C) together.

> It is a pain to align a whole Haskell project according to new requirements to make the whole system work again.

I'll quote a recent tweet by Chris Done: "I feel like 80% of Haskell advocacy should involve screencasts of people refactoring large codebases."

In my experience, refactoring is easier with a strongly typed compiler, not harder. It may take more time and work to get your program to "run" again, but the end result will be more likely correct, and more likely to be a better design.

This has generally been my experience as well when I'm working with almost any reasonable type system. Every time I've focussed on leaning on the type system, it's made refactoring much quicker for me and left me feeling way more confident.

Every time I encounter a bug I think "is there a way I could have encoded this into the type system so it would have never compiled with the bug?" The thing I like about Haskell is how often I can say "yes" to that question and implement it. From what I understand, that question would yield "yes" even more often with dependent types, though I've yet to really learn much about them.

I really dig lisp (especially clojure) but the thing I miss when I'm using lisp is a really solid type system.

There are other ways of making refactoring easy. I'm not trying to dismiss a powerful type system - which is a great help indeed - but pervasive unit tests and contracts work as well. Linters are a nice addition, and even gradual typing like in Erlang with Dialyzer is much better than nothing. The point being: problems solved by type systems are being solved with other tools, too. I don't want to argue about how well these tools work, because all of them (including static type systems) have relative strengths and weaknesses; it's just unfair to present refactorings or correctness guarantees as benefits reserved for type systems alone.

I'd argue that you can rely more on powerful types because they lend more to the totality of your program. This is just a hunch/intuition though. If I'm wrong, I'd be glad to know why and how so I can correct that ;)

If it takes longer to refactor the program, that can kill the initiative to do so.

When you do get the program to run, but it turns out that the refactoring is not wanted for whatever external reason, then you just wasted more time than was necessary.

Without the bondage of static typing, you can just refactor a small enough fraction of the program, which can run now in some limited way. You can use that to motivate yourself, or to pitch the idea to other people.

It is helpful to be able to play "what if" without committing a lot of unnecessary time and effort.

I fully understand the idea about motivation (and my "let's muddle through"-oriented brain is always tempted by this), but I think it goes both ways:

If it takes little time to refactor the program but, because of the lack of safety checks, what seemed to be working "in some limited way" turns out to be a big ball of bug-ridden mud in the end, and maybe the whole approach wasn't sound, wouldn't that also kill motivation for future refactors? And what if keeping the motivation high for the more impulsive programmers leads to worse software?

Doesn't this look like something analogous to the old carpenter's motto: measure twice and cut once? "But I don't want to measure, I want to cut it now, it'll be faster! Measuring kills my motivation!" :)

I know CL, I know Haskell. Really what OP said has nothing to do with refactoring (which is indeed painless in Haskell) but more to do with interactive programming.

Haskell is not terrible at interactive programming, but compared to Common Lisp and Smalltalk, it's garbage.

And this is exactly why it feels so kludgy and awkward when one tries to do bottom-up programming and organically evolve his program, the standard way of programming in CL and Smalltalk since they are image-based languages that are designed for exactly this.

This is one of the benefits that one with no prior experience in working in this way can quickly attempt to dismiss, but it's really THE most important reason for someone to use CL rather than Haskell. By having a language that allows you to program in a live environment, from the inside and keep the entire system running at all times, you become one with the machine. The feedback loop is kept extremely short, and you're in a constant state of flow.

It's too bad when people dismiss CL (and Smalltalk) based on trivialities (type systems) that are completely irrelevant in the grand scheme of things. Writing code in Haskell to me feels like a sterile, boring process similar to working out mathematical proofs. That's not to say that Haskell doesn't have its place. When the domain is well-defined and I know exactly what I want to do and how I'm about to do it, I'm more inclined to reach for Haskell or Ocaml than Common Lisp.

This is not the case for me most of the time however. When I program for myself, I find that I often have a general idea of what I want to do but no specifics in mind. It is then that I reach for Common Lisp in order to flesh things out and get inspired if you like. Or, similarly, when I'm doing a project that is hard, the domain uncertain, something with solutions that are not obvious ahead of time. Common Lisp and Smalltalk shine for these sort of problems, there is nothing else that comes close in my opinion.

Common Lisp but not Scheme/Racket I hear you say? Yes, this is indeed the case. Racket is just bad at interactive development of the kind I just described, because its developers deemed image-based programming 'too confusing' and removed all support for it from the IDE. This should also explain why it's not mentioned at all in the original article. Moreover, I find that in order for someone to truly appreciate how empowering image-based development is, one needs years of writing code in traditional non-image based languages. Which is why the best feature of Lisp in my opinion is swept under the rug by most newbie programmers who try out Lisp. They lack the experience to realize what they are missing.

Which is also why the original post author completely misses the point. The real power of Lisp comes from what I just described, not from 10 or 20 bullet points that one needs to compare side by side with other languages. By his own admission he is a newbie programmer, and a newbie to Lisp too, as made evident by his misunderstandings about the Common Lisp macro system vs. Scheme's "hygienic" macros. Hopefully he will grow and learn to appreciate Lisp for what it really is.

Finally, writing code in Common Lisp or Smalltalk makes me feel like an architect of my own private universe. This is exactly what programming should feel like, an organic process, fusing mind and machine together if you like.

Very much so. I do quite a bit of abuse to my C code to get similar (if weaker) assistance.

> It is a very good choice if I know the algorithm and data structures exactly in advance.

I'd say the exact opposite: the great strength of Haskell lies in its powerful support for abstracting things out so that these things can easily evolve independently. The hard part of taking advantage of this seems to me to be that we're used as programmers to not having those tools, so it is hard to get used to thinking about doing that kind of abstraction.

I understand that. I really like Haskell but I wonder if the type system is actually too strong for the real world.

Generalizing algorithms sounds nice but if Haskell is so powerful, why is the Haskell community still not able to provide a convenient working package manager? Perl has CPAN, Ruby has Gems, Lisp has ASDF and Quicklisp but Haskell is still stuck with buggy Cabal.

I have been in Cabal hell many times. I know that there are sandboxes, and I considered installing Nix and NixOS just to use Haskell. It surprises me very much that a simple package manager is such a big issue in Haskell. This is a typical example of a real-world application that causes strange difficulties in Haskell where other languages have no issues at all.

FPcomplete is already working on a new solution with snapshots. I am curious how convenient it will be.


Cabal is not buggy. Or, more precisely, Cabal Hell is not the result of bugs in cabal. It is the result of 1) static linking with cross-module inlining, 2) a lot of diamond dependencies, and 3) a lot of breaking changes in the ecosystem.

1 is arguably an artifact of Haskell - performance really suffers if you can't do this - but also something that's arguably desirable generally.

2 is the result of the ability to provide abstractions and tools that are useful in a phenomenal number of contexts (mostly a good thing) and the failure of the language-as-standardized to provide these directly (mostly a bad thing).

3 is the consequence of an active community that values experimentation, is willing to try new things, and places an emphasis on Getting It Right. This is actually more mixed a blessing than it sounds.

> I really like Haskell but I wonder if the type system is actually too strong for the real world.

Given the amount of "real world" programming that's been done with Haskell, I suspect not.

> Generalizing algorithms sounds nice but if Haskell is so powerful, why is the Haskell community still not able to provide a convenient working package manager?

I don't see a real problem with Cabal with sandboxes.

> Perl has CPAN, Ruby has Gems, Lisp has ASDF and Quicklisp but Haskell is still stuck with buggy Cabal.

Cabal doesn't seem to be buggy, and is about on the same level as Ruby Gems. (I think the difference is that the version-bounds conflicts that manifest at package install time in Cabal occur at runtime with Gems, because RubyGems installs multiple versions in the local repository but needs to import a single version at runtime. Bundler, which is separate from Gems, addresses this, among other practical issues with Gems.)

Dependency hell is common to package systems that don't have a single version of every package that is curated to work with all other packages in the set -- CPAN isn't immune to it either, at least from what I've seen on the web (I haven't used Perl at all in many years, and only very little then.)

Cabal, now that it has sandboxes available, seems at least on par with Gems.

> FPcomplete is already working on a new solution with snapshots.

It's really an intermediate stage between what package repositories/managers like cabal/gems/cpan provide and what Haskell Platform provides. It's not really a "new solution" for the same issue. [1]

[1] See the discussion here: http://www.yesodweb.com/blog/2012/11/solving-cabal-hell Cabal (and gems, etc.) operate at level 2, the new Stackage from FP Complete is at level 3, Haskell Platform is at level 4.

Wait, are you suggesting that the Haskell type system is too strong to write a package manager? That makes no sense. A much more valid reason that Cabal is annoying is that Haskell is a compiled language that uses static linking, unlike all the languages you mentioned, which are interpreted.

SBCL also uses static linking. It doesn't recompile every ASDF imported library whenever I compile my application. It just does so when it is actually necessary. Why doesn't it work so easily in Haskell?

I think it's the strong type system, because every small code change can break the whole fragile structure of a Haskell application. It's like the wheels in an old clock: if you break one tooth of a gear, the whole system stops. You have to take apart all the gears, change the affected one, and assemble everything back together. Cabal hell breaks loose when you realize that someone changed the shape of a small gear, and you don't have access to the old one, except for the copies you stored in your sandbox. Over time you get a whole farm of sandboxes full of obsolete parts. Then, long afterwards, when you have to maintain some of the old code, you realize you have to edit even the foreign (!) libraries yourself, because they are totally obsolete and incompatible with the current ones :-) Could that be the reason why there are (AFAIK) so many aged or even unmaintained Hackage projects?

> SBCL also uses static linking. It doesn't recompile every ASDF imported library whenever I compile my application. It just does so when it is actually necessary. Why doesn't it work so easily in Haskell?

It does; in fact, in GHC -- the dominant Haskell implementation -- it's even easier: it doesn't recompile installed packages, period. Packages are precompiled, and are merely linked when you compile an application that depends on them.

Aside from it being factually incorrect, I'm not sure why this is even relevant to the discussion here -- "dependency hell" doesn't come from recompiling libraries when you recompile applications, it comes from conflicts between version requirements for dependencies among packages installed in a single repository (which is why sandboxing is a tool for addressing it.) It manifests when you try to install a package with dependency conflicts with another app, not when you try to compile an app that depends on the conflicting packages.

> I think it's the strong type system because every small code change can break the whole fragile structure of a Haskell application.

I think before you try to explain why something happens, you should first be sure that the thing you are describing actually happens.

> Could that be the reason why there are (AFAIK) so many hackage projects aged or even unmaintained?

I suspect the reason is the same that there are so many Ruby Gems in the main repos that are aged or even unmaintained (and I'd be surprised if this was different in any similar package database that wasn't actively culled of non-current projects -- which would have its own downside) -- the repository system is, by design, low cost to enter, so lots of things that have no strong demand or lasting commitment get posted to it.

"In the Lisp world I can develop prototypes very quickly."

In my experience, this is almost entirely due to familiarity, and not due to any specific or general language features.

"...I can use Shen (shenlanguage.org) which has a strong rule based type system on top of Lisp..."

I really, really need to bump Shen up on my list of things to play with.

Disagree about Haskell being even remotely useless for every day real world applications. Haskell is good for reducing complexity and being able to rely on guarantees. A couple months ago I wrote a screen scraper/web interface that was basically an expired domain purchasing application in Haskell.

> We've figured out how to do type systems

This is probably the only valid point you have.

> a lisp programmer from the '90s would be amazed by CPAN, never mind maven

CPAN was started in 1997. A lot of Lisp programmers back then were also doing Perl, and a few people who were doing Perl then are doing Lisp now.

Whoever is "amazed" by Maven should go jump off a bridge.

> We have a bunch of reasonable approaches to effect tracking, which Lisp folk used to have to do by hand

I'm assuming you're talking about generalized arrows or something and not monads. Generalized arrows partly came out of work on linear types and parallelism done by Lisp people in the 80s and early 90s.

> We know how to solve the expression problem.

Thanks mostly to PLT/Racket people: http://en.wikipedia.org/wiki/Expression_problem

> blah blah blah Rails functionality

Rails doesn't have functionality, Rails has a bunch of shitty complicated ways of doing web programming, with a bunch of shitty poorly maintained Ruby libraries (most of the good ones are just FFI into C libraries).

If you're going to talk about modern "practical" languages, you could do better. Go is pretty great except it doesn't have exceptions. JavaScript solved code distribution and program-level virtualization. Those are much better comparisons.

> Nonsense. We've figured out how to do type systems. We can even fully infer types if you're willing to accept some quite reasonable restrictions on how polymorphic your code is.

That dates back to the 70s: http://en.wikipedia.org/wiki/ML_(programming_language)

To be clear, they were far ahead of lisp in the 90s too.

The major innovation is probably more societal in these languages. We discovered that they really are much better for writing reusable abstractions on the level of the whole community. We also ran into some issues with scaling those abstractions at the level of 10s of thousands of packages and are slowly coming to solve them.

Type systems are the one exception, but it still remains broadly true that Lisp was way ahead of everybody. There are many things we could talk about besides type systems:

Lambda expressions - just now reaching Java and C++, been in Lisp forever

Garbage collection – (obviously)

Turing-complete (edit: fully evaluating) macro systems – I've heard C++ is moving in this direction (not sure to be honest) but Lisp is still ahead on this

Gradual/optional typing – Others have been moving towards this, CL has had it forever

Interactivity/REPL – e.g. Swift may finally give a REPL to systems programmers, Lisp has had it forever

Dynamism/late-binding – Lisp is still way ahead on this, goes hand-in-hand with interactivity

Everything serializable – Lisp is still ahead

Structured code editing (paredit) – Lisp is still ahead, not sure anyone else even knows what this is

Multiple inheritance – I think this is coming to Java finally

Image systems (hibernating a running process) – Still ahead

Condition system / advanced exception handling – Still ahead

Functional programming techniques (map, reduce, etc.) – Still ahead

Keyword arguments – Recently added to Ruby, Lisp has had them forever

I don't know about Swift but equating a REPL to interactivity is a horrible disservice to Lisp (and Smalltalk).

For example, Python has a REPL, but it is not very well suited for interactive programming: Python does not handle redefining a module after you import it without some voodoo. Nor does it have forward references. In Lisp I can define a method that specializes on a class I haven't yet defined, or define a function that calls function B which I will define later in the code. Nor have I seen anything akin to the SLIME inspector and presentations, which themselves are not quite at the level of the Symbolics Genera environment for interactive programming.
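To illustrate the forward-reference point with a minimal Common Lisp sketch (names are made up; the compiler merely warns about the not-yet-defined function):

```lisp
;; CALLER compiles fine even though HELPER doesn't exist yet --
;; a standard CL compiler only emits a style warning.
(defun caller (n)
  (helper n))          ; HELPER is defined later in the file

(defun helper (n)
  (* n 2))

;; At the REPL you can also redefine HELPER at any time; existing
;; callers pick up the new definition immediately, no restart needed.
(defun helper (n) (* n 10))
```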

Excellent point. I found myself struggling to describe Lisp in terms that non-Lispers would understand (blub paradox I guess), and I figured almost everyone knows what a REPL is. You're absolutely right that a REPL itself is just scratching the surface, and there is all kinds of interactivity beyond that.

A couple of quibbles with your list:

> Multiple inheritance – I think this is coming to Java finally

But it's been in C++ for a very long time.

> Functional programming techniques (map, reduce, etc.) – Still ahead

Still ahead of who? Haskell? How so?

Still ahead of mainstream programming languages, which are typically imperative by design, and usually overly "noun" focused compared to the "verb"-friendly nature of Lisp. Adding map/reduce/etc. to the noun languages is a bolt-on that is not exactly ideal.

Take Java 8 which now has map: sure, you can pass a lambda expression with a custom function body, but is using functions (particularly side-effect-free ones) as arguments idiomatic otherwise? I can only imagine the amount of ceremony code that ensues when trying to merge those worlds. Don't get me wrong, having map/reduce/etc. is an improvement to Java for sure, but Lisp is still well ahead on the overall cohesion and utility of such things within the context of the rest of the language.
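For contrast, a tiny sketch of the "verb-friendly" style in Common Lisp: functions are ordinary values passed directly, with no wrapper objects or stream pipelines in between.

```lisp
;; Plain higher-order calls; #'name denotes the function object.
(mapcar #'string-upcase '("foo" "bar"))    ; => ("FOO" "BAR")
(reduce #'+ '(1 2 3 4) :initial-value 0)   ; => 10
(remove-if #'oddp '(1 2 3 4 5 6))          ; => (2 4 6)
```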

Multiple inheritance is much more complicated and restricted in C++ than in Common Lisp. As with many things in C++, you really have to know what you're doing to use it correctly. In CLOS it pretty much just does what you expect, and only when you want to do something unusual do you have to learn the finer points.

"Turing-complete macro system - I've heard C++ is moving in this direction (not sure to be honest) but Lisp is still ahead on this"

1) Turing-complete is a bug, not a feature.

2) C++ template metaprogramming has been Turing Complete for ages.

3) There are nonetheless significant limitations on what you can do with templates in C++, compared to what you can do with macros in Lisp.

"Turing Complete" is a theoretical construct that doesn't really mean much other than giving you easy ways to prove that certain questions about the system can't always be answered. One significant reason for this is that "Turing Complete" merely means that you can embed any computation in the system in some encoding - with something like a macro system, the encoding you generate is the whole point.

I understand what you're saying. What I mean is not the trivial "nand is turing complete" sense, which I agree is harmful and silly, but full eval capabilities. Lisp macros can fully evaluate Lisp code, and I consider this a great feature, but I don't know how else to describe them. "Full evaluation macros"?

I think "powerful" is probably sufficient :-P

To my mind, having eval available is a cheap way to make sure you are sufficiently powerful. In principle, it's not the best way. In practice, it's arguably the best way out of options at hand, though some alternatives compete.

> What I mean is not the trivial "nand is turing complete" sense, which I agree is harmful and silly, but full eval capabilities.

The two are equivalent (provided that the target language that can be evaluated is itself turing complete) -- that's rather the nature of turing completeness. If you have the former, you can make the latter with it and nothing else; if you have the latter, you necessarily also have the former.

Yes, it was a poor choice of terms. To be clear, eval isn't equivalent except within the narrow theoretical notion of turing completeness: it can do I/O, for instance file I/O. And it accepts high-level language code directly which is very important in a practical sense compared to, say, accepting nand instructions only.

It's hard to describe because even "full access to the language at compile time" would be useless if that language were C++. The advantage of macros is that you have access specifically to Lisp at compile time, not just any old blub language. And that just circles back to "Lisp is great". :)
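As a hedged sketch of what "full access to the language at compile time" buys you, here is a macro whose expansion does real I/O while the compiler runs (the file name is hypothetical):

```lisp
;; The macro body is ordinary Lisp executed at expansion time; it can
;; open files, compute, call any library. Here it embeds a file's
;; contents into the program as a string constant.
(defmacro embed-file (path)
  (with-open-file (in path)
    (let ((text (make-string (file-length in))))
      (read-sequence text in)
      text)))   ; a string is self-evaluating, so it is the expansion

;; (embed-file "banner.txt") compiles down to a literal string,
;; with no file access left at run time.
```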

Java/C++ are not the world. Your list is not the "features I think are new or cool or advanced about my programming language".

Fine, but can you name some language features that came out after the '90s that weren't in Lisp already? Excluding type systems?

Newer features tend to use type systems for part of their implementation because they're a powerful tool, but I feel the feature is its own thing. I'm excited about typeclasses, or anything else that solves the same problem. I'm excited about effect systems, which tend to be implemented as types but don't have to be. I hope to learn more about the matryoshka approach, which seems like the opposite thing or at least opposite emphasis to lisp's approach to metaprogramming (which emphasises doing the metaprogramming in the same language as the program itself).

>a lisp programmer might eventually put together a collection of idiosyncratic macros with much of the functionality of Rails, but that wouldn't be a project you could hire someone else to maintain

There is, in fact, such a thing as Common Lisp libraries. See https://github.com/quicklisp/quicklisp-projects

1034 projects with ~48 waiting to be added.

Sure, but is there a one-stop framework that lets you write a CRUD web application as fast as you can with Rails? Was there such a thing in the '90s?

Framework: Caveman2[0] or Ningle[1], which run on Clack[2] (like Rack/WSGI), which supports five backend servers.

Templates: Djula[3], closure-template[4].

ORM: Crane[5], Postmodern[6].

[0] https://github.com/fukamachi/caveman

[1] https://github.com/fukamachi/ningle

[2] https://github.com/fukamachi/clack

[3] https://github.com/mmontone/djula

[4] https://github.com/archimag/cl-closure-template

[5] https://github.com/eudoxia0/crane

[6] http://marijnhaverbeke.nl/postmodern/

Don't forget Weblocks[0]! The learning curve is imposing, but once you climb it, it's a very productive environment to work in.

[0] http://weblocks-framework.info/

No, CL has like 10 different web frameworks or so. I never used any of them, not a big fan of Rails either though.

Talking about the 90s, there was no Ruby. But I guess we could have had Lisp on Lanes in the 90s, if there had been demand.

Common Lisp has about 10 different anything frameworks, which nobody in particular uses in favor of writing their own, mostly because they can. Many of the ten are in fact written into the standard.

Common Lisp isn't so much a "language for writing languages" as a giant bag of everything.

to quote an old quip: you have a problem. common lisp has a clunky solution that's built into the standard, and scheme has a beautiful one that doesn't work in your dialect.

> Talking about the 90s, there was no Ruby.

There was Ruby publicly released for a little under half of the 1990s (from late 1995.) There wasn't Rails until a decade later, though.

We can even fully infer types if you're willing to accept some quite reasonable restrictions on how polymorphic your code is.

Personally, I don't consider "don't pass a polymorphic function to another polymorphic function" and "don't demand a polymorphic function as an argument" to be all that reasonable.

But you only have to infer types because you insisted on removing them from the run-time information.

Sure, you're clearing the road of fallen logs, but they fell there by your own axe.

Having the type there at run time is smarter than working up a lather prior to run time to avoid having it there.

Also, real systems don't have a well-defined "compile time" and "run time": the two are intertwined. Compile time is just the compiler's run time, and we need access to the language at that time, and it has to be type-safe as well, and we would like that compile-time code to itself be compiled (and to have access to the language during that code's compilation ...).

Sure but that can be said of half a dozen languages in the 70s. Unix taking over the world ended up convincing two generations that C was state of the art.

Sometimes I feel like a Roman after the Empire downfall.

So many choices for systems programming gone thanks to UNIX, and now kids think C is the only way.

Yeah, I think that most programming language developments since the '70s have involved putting some ideas from C and Lisp in a blender for a few minutes.

The main advances IMO have come in the areas of IDEs, build systems, and language ecosystems.

I'm a mediocre programmer at best who's done a bunch of Lisp in the past. I loved using it, but these days I rely on Ruby for getting things done because it gives me what I need from Lisp backed up with a package system that just works. I'd rather work in Lisp, but I can make more progress in Ruby.

From the 70's I would rather wish that the mix would be Lisp, Smalltalk, Cedar and CLU. No need for C.

And smalltalk in that mix too.

Can not resist...

This article is fairly misguided. I find it painful that everybody who writes about a Lisp offshoot (Scheme, Clojure, ...) ends up misrepresenting Common Lisp.

To sum up "Why Lisp?" from a CL perspective: CL has pretty much every feature of every programming language around, only better designed, better implemented, and generally more powerful. It's just a power-user language. It's not just macros, sexps and lambdas. It's also number types, arrays, OOP, symbols, strings, structs, dynamic/lexical variables, lambda lists, multiple return values, online disassembly, exceptions, restarts, the MOP, a metacircular definition, great implementations, great libraries... the list goes on and on; I surely forgot a ton of great stuff. TL;DR: CL has everything. And this "everything" is designed so well that it's extensible, and no CL programmer ever needs to doubt that any new feature can be implemented easily in CL.

To correct a few of the wrong statements of OP:

> “Wait—I love state and data mu­ta­tion. Why would you take them away?” Be­cause they’re false friends.

CL is NOT particularly functional. Just because we know how to write good side-effect-free code doesn't mean it's a functional language. (We have SETF after all; he failed to mention that above.)

> a syn­tax trans­for­ma­tion in Racket can be far more so­phis­ti­cated than the usual Com­mon Lisp macro.

Outright wrong. The only reason Scheme has weird macro systems is that it's a Lisp-1. CL is designed well (thus being a Lisp-2), and that's why its simple but ultimately more powerful macro system can work.

> A macro in Com­mon Lisp is a func­tion that runs at com­pile-time, ac­cept­ing sym­bols as in­put and in­ject­ing them into a tem­plate to pro­duce new code.

This is so wrong I had to write this comment. A macro in Common Lisp is a COMPILER: it accepts arguments and returns an sexp. It is infinitely powerful; it can do EVERYTHING.
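A minimal sketch of that "macro is a compiler" view (the UNROLL name is made up): the macro is ordinary Lisp running at expansion time, receiving unevaluated forms and returning a freshly computed sexp.

```lisp
;; Repeat BODY N times at compile time; N must be a literal integer.
;; The full language (LOOP, list manipulation, ...) is available in
;; the macro body, because it's just a function over sexps.
(defmacro unroll (n &body body)
  `(progn ,@(loop repeat n append (copy-list body))))

;; (unroll 3 (print "hi"))
;; expands to (progn (print "hi") (print "hi") (print "hi"))
```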

I would take issue with the OOP in CL being better implemented than other languages. Don't get me wrong, the Metaobject Protocol is a feat of modern engineering, but the actual result is an OOP that feels tacked on after the fact. CLOS is very powerful, but when it comes to OOP, other languages are syntactically cleaner in their implementations.

Also, it's a bigger debate, but the lack of hygienic macros in CL is often seen as a negative. Again, this is a larger debate, but certainly a case can be made that CL doesn't have everything designed so well.

I love CL, but it's helpful to remember its limitations.

> syntactically cleaner in their implementations.

Listen to yourself please.

> lack of hygienic macros in CL

Which lack? CL macros are usually very hygienic, CL being a Lisp-2 and all, and having GENSYM.
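For instance, a hedged sketch of the standard GENSYM idiom (the SWAP! name is made up; CL already has ROTATEF for this): the temporary is a fresh uninterned symbol, so the expansion cannot capture a user's variable.

```lisp
;; GENSYM returns a new, uninterned symbol each call, so the LET
;; binding in the expansion can never shadow a caller's binding.
(defmacro swap! (a b)
  (let ((tmp (gensym "TMP")))
    `(let ((,tmp ,a))
       (setf ,a ,b)
       (setf ,b ,tmp))))

;; (let ((x 1) (y 2)) (swap! x y) (list x y)) => (2 1)
;; Even (let ((tmp 1) (x 2)) (swap! tmp x) ...) works: the
;; expansion's temporary is a distinct gensym, not the symbol TMP.
```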

> I love CL, but it's helpful to remember its limitations.

What limitations? Not having syntactical things you seem to like and supporting non-hygienic macros are not limitations. Quite the opposite: I believe these are features.


> More generally Common Lisp lacks a coherent web based module repository infrastructure like those of other dynamic languages such as CPAN or RubyGem.

What about ASDF and/or Quicklisp (http://www.quicklisp.org)? They work well for me.


ASDF is the build system, Quicklisp the package manager. QL depends on ASDF.

The number of bindings you can add to a symbol in CL is irrelevant to the macro system. In CL, you run into somewhat fewer instances of specific kinds of bugs that Racket's macro system prevents entirely.

The Racket macro system is more powerful, having more information and more control over the system, than the CL system. This additional expressiveness is what enables systems as sophisticated as the Racket contract, class, unit, and type systems to be built entirely with macros.

> CL has pretty much every feature of every programming language around, only that its better designed, implemented and generally more powerful.

The thing is, sometimes what makes a language powerful is the restrictions it imposes. For example, if you know a function is referentially transparent, then you know that calling it multiple times won't cause side effects to happen multiple times.

Having restrictions means that you can rely on those restrictions applying to other code, which might allow you to do things that would otherwise be unsafe.

> > "Wait—I love state and data mutation. Why would you take them away?" Because they're false friends.
>
> CL is NOT particularly functional. Just because we know how to write good side-effect-free code, doesn't mean it's a functional language.

He didn't claim that CL or even Lisps were purely functional, though:

"Yes, I know that other lan­guages of­fer func­tional-pro­gram­ming fea­tures, and that Lisps aren’t con­sid­ered pure func­tional lan­guages."

The top reason here could have been written: Lisp is more expressive. You can find ways to express an idea that make sense now, and which are readable. Macros are a different part of the same idea.


Maybe I'm doing it wrong, but a problem I've had with racket is as you begin to build larger projects, when something breaks it can be quite difficult to find out exactly where the break happened. When you compile Java or run Python, it's almost always immediately obvious what broke.

The way I got around this was to use a methodical TDD approach. Would be a shame if that turns out to be as good as it gets for lisp.

Something I haven't done yet but am interested to get to is attaching a repl console to a running process.

Yes, that is very important. When I watch experienced programmers learn Lisp, they write a couple functions, then go through the pain of debugging that.

They should unlearn that. And put tiny fragments in the REPL. (Or whatever interactive tool you have.) Playing with it.

Since lisp is homoiconic (refreshed on this, see downvotes below) it would be nice if you could type things into a repl and then run a function like this: (persist function-i-just-wrote). And then it would save to a text file. Then you wouldn't need to stuff around with the mouse copying from one buffer to another and the like.

(I expect something similar is possible in emacs, but emacs is not my thing.)

Hmm. I suppose I could just write the persist function myself. Watch this space.

What you just described is pretty much universally handled by editor integration.... I can't imagine anyone actually develops lisp without this.

The editor is hooked directly up to a running image.. a keypress (or two) causes an expression to be executed in the running image... you can flip to the REPL and goof around directly and/or at the same time work from one or more editor buffers and selectively execute code as you desire. You can bust into the debugger whenever you feel like it, etc etc.

Someone else described it better above I think - but this image-based interaction is one of the most attractive things for many - it goes well beyond just having a REPL where you can evaluate code... lisp was basically designed around the idea of interactive development.

If you think this through to a large, complex project - you can have a giant running application, working in production, and still attach to it with an editor/environment and debug it live... that's hard to do with anything else.

This is easy: you only need to wrap your function in the body of a 'with-output-to-file' macro. You can also send the output to a stream or to a string, or use * as the stream name to grab stdout too, if I'm remembering correctly.
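A hedged Common Lisp sketch of the PERSIST idea from upthread, using the standard WITH-OPEN-FILE (the PERSIST name and default path are hypothetical):

```lisp
;; Pretty-print a definition form to a file so REPL experiments
;; survive the session; appends so repeated calls accumulate.
(defun persist (form &optional (path "scratch.lisp"))
  (with-open-file (out path :direction :output
                            :if-exists :append
                            :if-does-not-exist :create)
    (pprint form out)
    (terpri out)))

;; Usage: quote the definition you just evaluated at the REPL.
;; (persist '(defun twice (n) (* n 2)))
```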

Is possible.

The manual to this publishing system, discussing how its markup/programming language is implemented as a custom Racket language, is pretty interesting: http://mbutterick.github.io/pollen/doc/

Author is wrong about hygienic macros – they are not more powerful. They are less powerful, and more complicated, in order to enforce safety. Whether this is preferable or not is a matter of debate.

No, the article is correct. Hygienic macros give you strictly more information to work with, enabling them to be more expressive. It's possible to implement unhygienic macros, either at the level of an individual macro or as a whole new macro system, using Racket's macro system.

I'm thinking you're right in the theoretical sense, but I'm right in the practical sense. Every time I've looked at hygienic macro systems, it just seems like now I'm jumping through extra hoops to overcome a more restrictive way of doing things. Is that extra information you cite typically useful for anything besides safety? Can you give an example?

Yes, the information is useful for lots of things. For example, tracking the binding information in the data is how DrRacket manages renaming, how Scribble (the Racket docs tool) hyperlinks every identifier in every example to the correct documentation for the identifier, how Typed Racket tracks type metadata for identifiers, ...

Most of my development time is spent on correctness, safety, and refactorings.

If the extra information helps with those at the cost of none of them, it's almost always the right choice for me to use.

It seems most people here have never used (nor even tried) Racket.

I decided to use Racket for my little side projects, as a replacement for Scala and Clojure.

I chose it because it was clear to me that I can't stand the limitations other languages impose in terms of style and boilerplate; the Racket macro (aka syntax transformer) system is the most advanced I know of for reducing boilerplate to a minimum, so I can just write what I want to express. In fact I rarely write macros, because writing a good macro demands that you take care of syntax errors, and I am lazy in the bad sense of the term.

I chose it because it's dynamically typed, and I am increasingly convinced that types get in your way most of the time (except for complex algorithms) (I write little projects, so the refactoring argument doesn't apply). It enables me to write code and eval it on the fly with Geiser (using enter! behind the scenes); after evaluating a new function I test it in the REPL, hack until the function meets the requirements, copy-paste from the REPL, and boom, I have a unit test. Also because it has eval, which will come in handy at least once in your programming life for sure.

I chose it because of its sexpr syntax; as a heavy Emacs user I know that any other syntax is a pity.

Also because it has (and I use):

1. An LALR parser (implemented through a macro).

2. Pattern matching with nothing to envy in Scala's or Clojure's destructuring.

3. An optional type system.

4. A contract system.
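For instance, the pattern matching from point 2 is Racket's match form; a tiny sketch:

```racket
;; destructuring a three-element list, Scala/Clojure style
(match '(1 2 3)
  [(list x y z) (+ x y z)]  ; => 6
  [_ 'no-match])
```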

What I find hard as a newcomer (to Racket, not to programming; I already know Scala, Clojure, half of C++ :), and PHP) is:

1. The breadth of the features the language offers; deciding which feature to use, e.g. classes or generics.

2. The documentation is rich but lacks examples for the common cases, so you need to read the docs for each function (sometimes they're huge).

3. Understanding how Racket modules work is quite hard, although you have the documentation; if you don't plan to play with the macro expander (the thing that runs your macros) and some dynamic features, you don't really need to.

4. You need to 'register' the errortrace library if you want a stack trace, which was quite surprising behaviour for me.
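Concretely, assuming the standard invocation described in the errortrace docs (the file name here is invented), that registration is done from the command line like this:

```
racket -l errortrace -t my-program.rkt
```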

My opinionated conclusion:

Racket is the best language design I have ever seen; it's hard to learn, but it makes you feel that learning another language just amounts to learning a new sub-optimal syntax. Sadly the ecosystem lacks libraries and people, and I am not helping in that regard.

I'm glad you like Racket, and you're right about some of the weaknesses, especially in providing guidance and examples for how to do things.

I have questions about points 3 and 4. For 3, do you mean it's hard to figure out how the implementation of something works, especially when that something is a complex macro? If so, I can't disagree.

For 4, if you just run Racket programs at the command line, you should get stack traces (although less nice than the ones errortrace provides). Do you not get those?

For 4, if I run from the command line (Racket 5.93) I just get the contract violation message, which doesn't provide positional information; it's not useful when you have a list contract error with dozens of uses in a file.

For 3, I mean that with words like 'visiting', 'instantiated', the phases, the syntax transformers, there are many things to grasp. For instance I used to believe that phase 1 bindings were the functions executed by the expander instead of the ones defined by define-syntax.

For complex macros I just expand them; that's fine.

Another point: continuations are hard to grasp. I really understood them when I saw http://www.infoq.com/presentations/continuations-web-os; the key point that helped me was that he mentions a continuation captures the exit as well.

Another flaw I didn't mention is that the error messages of typed/racket can be hard to understand, especially ones involving parametric functions; this problem comes up from time to time on the mailing list. I know you take these kinds of problems seriously.

Another reason I like Racket is that it's lexically scoped but also lets me specify exactly the portions where I want more dynamism, with parameters or with eval + namespaces.

Racket also misses some tooling; what comes to mind is a coding standard checker and formatter. It could be done with free-identifier=? and some rules. New macros could provide their associated rules as metadata in a submodule. Is it doable - I mean technically, not whether macro writers can be expected to write such rules?

Thanks for the feedback. Certainly there are lots of complicated concepts to learn in the macro system.

On 4, if I run this program: https://gist.github.com/8fbc12877e2639c5f94c I get a backtrace like this:

  [samth@huor:~] r ~/tmp/x.rkt
  /: contract violation
    expected: number?
    given: '(1 2 3 400 500 600)
    argument position: 2nd
   other arguments...:
     /home/samth/tmp/x.rkt: [running body]
There are no other stack frames because `average` is inlined, but if you have more complicated functions you should get more stack frames.

Ah, someone with Clojure experience. I wanted Clojure to be my next language to learn, so now I'm curious why you ditched it in favor of Racket?


- Whats the concurrency story for Racket?

- If I wanted to code my REST endpoints in Racket, are there competitive libraries available?

I ditched Clojure because I wanted something with more meta-language capabilities (syntax objects + Racket macros); it also has an LALR parser and a lexer lib, which I needed when I made the choice.

For the libraries, you can check them out yourself at http://pkgs.racket-lang.org; for what I do I don't need any of them.

For concurrency, keep in mind that Racket uses green threads whereas Clojure uses JVM threads, i.e. OS threads in OpenJDK.

I don't know what you want to do with Clojure, but I think it is a better choice than Racket. Clojure has a very opinionated view of how you should write a program, and this view is reflected in probably every Clojure library. The documentation in Clojure is lacking, but it provides the implementation (on the web page), so learning is IMHO faster, especially if you are a newcomer to functional programming. The other advantage is the leverage it gets from the JVM in terms of performance and use of Java libraries.

Regarding the Racket concurrency story, it has places; http://docs.racket-lang.org/reference/places.html.

"A place is a parallel task that is effectively a separate instance of the Racket virtual machine. Places communicate through place channels, which are endpoints for a two-way buffered communication."

So places are somewhat like Erlang processes.

Distributed Places are also interesting: http://docs.racket-lang.org/distributed-places/index.html

Places are about parallelism. For concurrency specifically, the most interesting features are threads, channels, and futures and can all be found here: http://docs.racket-lang.org/reference/concurrency.html
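As a minimal sketch of those concurrency primitives (the variable names are invented):

```racket
;; a green thread hands a value to the main thread over a channel
(define ch (make-channel))
(thread (lambda () (channel-put ch 'hello)))
(channel-get ch)  ; => 'hello (blocks until the thread puts)
```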

Distributed places is pretty cool, however!

Thanks. So basically we get most of what Java has to offer plus thread mailboxes, which is nice. Not sure what to make of Places though - ZeroMQ exists and has Racket bindings; is there something Places is better at?

Some practical features I enjoy in CL:

1. Conditions and restarts: As far as error handling in programs goes, this is the most rock-solid system I've encountered. You can tell the system which bits of code, called restarts, are able to handle a given error condition in your code. The nice thing about that is you can choose the appropriate restart based on what you know at a higher level in the program and continue that computation without losing state and restarting from the beginning. This plays well with well-structured programs because the rest of your system can continue running. Watching for conditions and signalling errors to invoke restarts... it's really much better than just returning an integer.

As a CL programmer using SLIME or any suitable IDE, this error system can throw up a list of appropriate restarts to handle an error it encounters. I can just choose one... or I can zoom through the backtrace, inspect objects, change values in instance slots, recompile code to fix the bug, and choose the "continue" restart... voila the computation continues, my system never stopped doing all of the other tasks it was in the middle of doing, and my original error was fixed and I didn't lose anything. That is really one of my favorite features.
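The shape this takes in code, as a small sketch (function and restart names are invented):

```lisp
;; Low-level code declares which recoveries it supports:
(defun parse-entry (line)
  (restart-case (parse-integer line)
    (use-value (v) v)      ; restart: supply a replacement value
    (skip-entry () nil)))  ; restart: ignore this entry

;; Higher-level code picks a restart, without unwinding the stack:
(handler-bind ((parse-error
                 (lambda (c)
                   (declare (ignore c))
                   (invoke-restart 'skip-entry))))
  (mapcar #'parse-entry '("1" "oops" "3")))  ; => (1 NIL 3)
```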

2. CLOS -- it's CL's OO system. Completely optional. But it's very, very powerful. The notion of "class" is very different than the C++ sense of struct-with-vtable-to-function-pointers-with-implicit-reference-to-this. Specifically I enjoy parametric dispatch to generic functions. C++ has this but only to the implicit first argument, this. Whereas CLOS allows me to dispatch based on the types of all of the arguments. As a benign example:

    (defclass animal () ())
    (defclass dog (animal) ())

    (defgeneric make-sound (animal))
    (defmethod make-sound ((animal animal))
      (format t "..."))
    (defmethod make-sound ((dog dog))
      (format t "Bark!"))

    (make-sound (make-instance 'animal))
    (make-sound (make-instance 'dog))
Will print "..." and "Bark!" But the trivial example doesn't show that I can dispatch based on all of the arguments to a method:

    (defclass entity () ()) ;; some high-level data about entities in a video game
    (defclass ship (entity) ()) ;; some ship-specific stuff... you get the idea.
    (defclass bullet (entity) ())

    ;; ... more code

    (defmethod collide ((player ship) (bullet bullet))) ;; some collision-handling code for those types of entities...
    (defmethod collide ((player ship) (enemy ship))) ;;; and so on...

Contrast this with C++, where dispatch happens only on the implicit first argument:

    Ship::collide(const Bullet& bullet) {}
    Ship::collide(const Ship& ship) {}
Where collide is a virtual function of the Entity class requiring all sub-classes to implement it. In the CLOS system a method is free from the association to a class and is only implemented for anyone who cares about colliding with other things.

The super-powerful thing about this though is that... I can redefine the class while the program is running. I can compile a new definition and all of the live instances in my running program will be updated. I don't have to stop my game. If I encounter an error in my collision code I can inspect the objects in the stack trace, recompile the new method, and continue without stopping.

3. Macros are awesome. They're like little mini-compilers and their usefulness is difficult to appreciate but beautiful to behold. For a good example look at [0] where baggers has implemented a Lisp-like language that actually compiles to an OpenGL shader program. Or read Let Over Lambda.

One of the most common complaints I hear about macros (and programmable programming languages in general) is that they open the gate for every developer to build their own personal fiefdom and isolate themselves from other developers: i.e., create their own language that nobody else understands.

Examples like baggers' shader language demonstrate that it's not about creating a Cambrian explosion of incompatible DSLs... it's about taming complexity; taking complex ideas and turning them into smaller, embedded programs. A CL programmer isn't satisfied writing their game in one language and then writing their shaders in another language. And then having to learn a third language for hooking them all up and running them. They embody those things using CL itself and leverage the powerful compiler under the floorboards that's right at their finger tips.

Need to read an alternate syntax from a language that died out decades ago but left no open source compilers about? Write a reader-macro that transforms it into lisp. Write a runtime in lisp to execute it. I've done it for little toy assemblers. It's lots of fun.

... this has turned into a long post. Sorry. I just miss some of the awesome features CL has when I work in other languages which is most of the time.

[0] https://www.youtube.com/watch?v=2Z4GfOUWEuA&list=PL2VAYZE_4w...

Great concrete examples, a couple of things to add about them:

Handlers, conditions, and restarts are much more expressive than in other languages: you can signal a condition (the analogue of throw) with restarts attached to it, which run only if you choose them in another part of the program where(/if) you decide to handle the condition (the analogue of try). It is as if you could throw an error with catch statements attached, and when you write the try statement, the catch statements attached to the throw are made available to it. Kent Pitman explains it much more eloquently than me[1].

Also, to show how well thought out CLOS's dynamic redefinition of classes is: when one redefines a class, the change is guaranteed to happen between the redefinition and the next time you access said class, so as to give the implementation leeway to decide the best strategy (for example, when redefining a class with 10K+ instances, doing it all at once might make the system unresponsive). Also, if the redefinition isn't just adding or removing slots, you can provide a function to handle the updating of slots for the redefinition.[2]

[1]: http://www.nhplace.com/kent/Papers/Condition-Handling-2001.h...

[2]: http://www.lispworks.com/documentation/HyperSpec/Body/f_upda...

"CLOS allows me to dispatch based on the types of all of the arguments."

But can it dispatch based on the return value?

Given that you can define your own custom method dispatch mechanisms in CLOS I'm pretty sure you could do something like this - you'd have to define the return type of each method somehow (can you use ftype with CLOS methods?) and have a way of specifying the return type you want back - but none of that sounds hugely difficult.

Mind you - it's 20 years since I was paid to write Lisp/CLOS, so take this with a grain of salt :-)

Dispatch what where based on the return value of what?

Can you explain this with an example of what the syntax would look like, along with a description of the fantasy semantics?

Of course if a specializable argument to a CLOS generic function call is itself a function call expression, then it is evaluated to its return value, and the type of that value is dispatched on.

The return value of the generic function itself doesn't dispatch anywhere; return "dispatches" to the caller.

We can think about dispatching a different method based on some type which the caller expects. A way to do it in CLOS is to add an extra parameter, perhaps a class symbol. CLOS can dispatch on symbols via the (eql ...) specialization, so we can do something like (foo 'integer arg) to call that specialization of foo which returns integers. This won't be deduced from context; evaluation of a form is context free in ordinary Lisp. Doing that sort of thing requires a code walker to transform the program.
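A small sketch of that eql-specializer pattern (the names here are invented):

```lisp
(defgeneric parse-as (type string))

(defmethod parse-as ((type (eql 'integer)) string)
  (parse-integer string))

(defmethod parse-as ((type (eql 'symbol)) string)
  (intern (string-upcase string)))

(parse-as 'integer "42")   ; => 42
(parse-as 'symbol "foo")   ; => FOO
```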

Dispatch a function based on the value it is expected to return. The version of this that exists in some real programming languages at the moment (no "fantasy semantics") is dispatching based on the inferred type of the expression. Depending on your POV, that's quite arguably a part of the value returned. It obviously can't be chosen dynamically by the call, or we would be dealing with "fantasy semantics" (or a time machine).

"fantasy semantics" simply refers to the imagined semantics we want to have: the functional requirement. The semantics which isn't real right now, but we would like to implement.

(Not sure why I got a downvote. Is there a possible perception that I wasn't being civil?)

It occurs to me that some dynamic dispatch can take place based on what the method call returns. Suppose we split the computation of the function into two functions: the function proper, and a fixup which is applied to the return value.

The main function's method could have a CLOS :around method wrapped around it which obtains the return value of the primary method (via an explicit (call-next-method) call), and then invokes the fixup method on that value. The fixup method is then dynamically dispatched on this return value, and possibly provides the ultimate return value.

Of course, this is different from dispatching a method based on the context in which it occurs. Still, no time machine needed.
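A sketch of that split (all names invented): the fixup generic is dispatched on whatever the primary method returned.

```lisp
(defgeneric fixup (result))
(defmethod fixup ((result number)) (float result))
(defmethod fixup ((result t)) result)

(defgeneric compute (x))
(defmethod compute ((x integer)) (* x x))
(defmethod compute :around ((x integer))
  ;; dispatch FIXUP on the primary method's return value
  (fixup (call-next-method)))

(compute 3)  ; => 9.0
```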

I did think "fantasy semantics" was meant to imply it was unlikely I could come up with coherent semantics for it. I wouldn't have considered that sufficient incivility to earn a down-vote, though.

Splitting functions is an interesting idea, though unless you're passing in context as an argument it'd be another way of dispatching on the same arguments (with, presumably, more user control but probably also more boilerplate).

Yes, in this conception of what it means to dispatch on the actual returned type, the caller doesn't know, in the same sense that it doesn't know what sequence of methods is dispatched based on the arguments, either! It calls a generic function, which does some magic internally and produces a return value (or various side effects or both).

By the way, it appears I can explain why I had used the word "fantasy": I have been recently involved in some discussions about Gödel's Incompleteness Theorem. That stirred up distant memories of reading Hofstadter, dredging up some of his vocabulary.

This is an example of dispatching on return type in Clojure - which is similar to Common Lisp.


(The Haskell purists point out that the dispatch isn't determined at compile time - form your own opinion about whether they have a point).

I'm not a Haskell purist, but this isn't really dispatch on return type, this is manual inspection of manually provided metadata that just happens, in this case, to be (intended to be) a return type.

The return type isn't enforced, incidentally:

    user> (defn ^String my-function [^Integer foo] (apply concat (repeat foo "x")))
    user> (my-function 3)
    (\x \x \x)
Whoops, I meant (apply str ...).

You also can't use the metadata on a function introduced by (defn ^ReturnType ...) in more or less any useful way, if you don't have a reference to the var itself. If you pass it to a function, you basically can't recover it:

    user> (defn do-something-interesting-with-return-type [f] (println "here's my metadata:" (meta f)))
    user> (do-something-interesting-with-return-type my-function)
    here's my metadata: nil

You can tell that something fishy is going on in the SO answer because the author has to pass in the function by name, #'return-string.

That sounds like a feature that would both be awesome, and annoy the crap out of me.

Are there examples where this actually leads to clearer code in the wild?

Several common typeclasses in Haskell have methods that are polymorphic in the return type, and I find it extremely useful.

The bounds of a bounded type:

    minBound :: Bounded a => a
    maxBound :: Bounded a => a
Converting an integer to an enumerated type:

    toEnum :: Enum a => Int -> a
Casting between numeric types:

    fromIntegral :: (Integral a, Num b) => a -> b
Parsing a string into a value:

    read :: Read a => String -> a
An empty instance of a container that can be concatenated:

    mempty :: Monoid m => m
An effectful action that simply returns a constant value:

    return :: Monad m => a -> m a
It leads to clearer code because you don’t need to specify so many types—either it’s clear (and inferable) from context, or you want to write generic code. For example, with Monoid (mempty & mappend) I can implement a function called mconcat, which concatenates a list of stuff:

    mconcat :: Monoid m => [m] -> m
    mconcat = foldr mappend mempty
Now I can join a list of strings, concatenate a list of lists, union a list of sets, or even the optional versions of any of those:

    mconcat [Just "foo", Nothing, Nothing, Just "bar"]
    Just "foobar"

Rust also allows polymorphic return types.

    trait Bounded {
        fn min_bound() -> Self;
        fn max_bound() -> Self;
    }
Which can be used like so. Using type inference.

    fn add_min_to_set<T: Bounded, S: MutableSet<T>>(set: &mut S) {
        set.insert(Bounded::min_bound());
    }
Or you can be more explicit

    let mut i: u8 = Bounded::min_bound();
    while i < Bounded::max_bound() { println!("{}", i); i += 1; }
Edit: spacing, s/++/+= 1/

Sadly, my own deficiencies in being able to read Haskell are keeping that from being a compelling argument. I do intend to visit it more later. Just, right now, I prefer the dead-simple-to-parse Lisp over this.

Translating, with some minimal explanation:

    name :: constraints_for_type => type_of_name
says "Thing named 'name' has type 'type_of_name' with constraints 'constraints_for_type'"


    minBound :: Bounded a => a
    maxBound :: Bounded a => a
"minBound has type 'a' for any 'a', so long as 'a' is an instance of Bounded"

These two are more values than functions, but the polymorphism happens the same way. If you use them where some particular bounded type is expected, that's the type they evaluate to. If you use them where an unbounded type is expected, you'll get a compile error.


    toEnum :: Enum a => Int -> a
"toEnum has type 'Int -> a' (that is, a function from 'Int' to 'a') for any type 'a', provided that 'a' is an instance of Enum"

i.e., Convert an int to the Enum type that the context is asking for.


    fromIntegral :: (Integral a, Num b) => a -> b
"fromIntegral has type 'a -> b' (that is, a function from 'a' to 'b') for any types 'a' and 'b' where 'a' is an integral type and 'b' is a numeric type"

The tuple syntax just means that all these constraints need to apply. I'm not actually certain why the syntax requires it.


    read :: Read a => String -> a
"read has type 'String -> a' for any type 'a', provided that 'a' is an instance of Read"


    mempty :: Monoid m => m
"mempty has type 'm' for any type 'm' (different letter is just stylistic - m for monoid), provided that 'm' is an instance of Monoid"

This is particularly interesting when you start doing polymorphic things with fold and friends.


    return :: Monad m => a -> m a
"return has type 'a -> m a', so long as 'm' is a Monad"

Here we see a "higher-kinded type" - m is a function at the type level that takes a type argument and produces another type, like a C++ template.

e.g. List parameterized by Integer gives us a List of Integers (List is spelled [] in Haskell)


    mconcat :: Monoid m => [m] -> m
"mconcat is a function from any 'list of m' to a single 'm', provided 'm' is an instance of Monoid"

    mconcat = foldr mappend mempty
"we define mconcat to be the right fold of mappend over the list, using mempty as our initial value"

I'm happy with myself for at least mostly getting what those were. I think I'd have to see more uses to really see the benefit, though.

I am curious on minBound and maxBound. They seem to be the same... What distinguishes them? Or, you are just saying these are two values that are defined. And they can only be given a value of a type that can be bounded?

Also, thanks for expanding!

Hopefully hornetblack's answer shed some light, but the most important bit is that most of the above were just type signatures. If they're part of a typeclass you would define them for a particular type in the instance. Otherwise, you'd define them generically using only other functions defined to generically work with the same constraints (or looser).

So in the case of minBound and maxBound in particular, you'd define them appropriately for a particular type when you declare a type to be Bounded.

minBound and maxBound are typeclass functions. Typeclasses are similar to Java interfaces.

So the Bounded typeclass is defined as

    class Bounded a where
        minBound :: a
        maxBound :: a
If you have a function and you want the argument to be Bounded you write

    allLessThan :: (Enum a, Bounded a) => a -> [a]
    allLessThan x = [minBound..x]
Here `a` is a generic type that implements the Enum and Bounded typeclasses. You could just write `allLessThan x = [minBound..x]` and Haskell will infer the type classes by itself.

To implement the typeclass you use `instance`, like so:

    data MyType = A | B | C | D
    instance Bounded MyType where
        minBound = A
        maxBound = D

A good list of some interesting 'day to day' benefits of Lisp. Maybe that is something that would appeal to beginners especially.

From my perspective Lisp is a powerful language because of its genesis in research. The question wasn't "How do we make a tool to make this hardware do what we want?"; the language was designed in pursuit of a research goal.

If you want to read the actual original Lisp paper look up: Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I John McCarthy April 1960

Paul Graham covers it nicely in this essay, especially the "What made Lisp different" list about 1/3 in http://www.paulgraham.com/icad.html

Lisp has had expressiveness we're only recently seeing in popular mainstream languages now. It has to do with the design, the simplicity, and how Lisp expresses problems. I've often heard it described as "the language gets out of your way." That's why Lisp.

Thanks for the link (it's one of PG's I hadn't read yet.) That being said, I really didn't like the part where he implied that if 1 line of Lisp can replace 20 lines of C that it also means that a feature can be developed 20 times faster in Lisp than C. Are you a Lisp programmer? I've only used it a bit, but I didn't get the impression things would scale that way for feature development.

Interesting point. Particularly I believe that the 'coding' part isn't the real job, I'm not just an overpaid typist. So why is concise expression good?

For the record, I am NOT a day-to-day Lisp programmer, but I've played with it a little and read about it some.

I think it's because it's about expressing your problem clearly. It's not "20 times less lines" so much as it is "20 times clearer expression". Not that it's exactly 20x, but it seems better. When you express things more clearly, and structure your code more like your problem, and less like your programming language, you shed a lot of accidental complexity ( See Out of the Tarpit - http://shaffner.us/cs/papers/tarpit.pdf ), so managing the complexity of your project is easier, which means fixing bugs and adding new features is also easier. I think this is the route being implied when they say you can develop features faster. I'm not so certain you'd see the benefit in a particular feature, but over time.

Note that a lot of the modern languages are picking up many of these expressive features, closing the gap there as well, which is wonderful.

If you check the research on the subject of development speed, it suggests a given programmer generates around a certain number of pages of code in any language he/she is proficient in. And if the programmer learns another language, they will write about the same number of pages when they get proficient. (Apply NaCl. I assume the real statement is that a given programmer generates a given amount of complexity in a work day.) [This paragraph of background was not needed, you seem aware of this point.]

You can, simplified, see Lisp as the grandfather of the scripting languages. (But with parse trees as syntax, an efficient compiler decades ago, and so on.)

The point with the scripting languages is that development speed is so much quicker than e.g. C, for many cases. Much less code is needed. Personally, I went to scripting languages from C originally because the sheer productivity is so fun. I would have gone Lisp if there were jobs.

Why Lisp? That is understood.

Why Racket? From an ignorant outsider's perspective, all Lisps seem to be more or less interchangeable when it comes to the language. They only differ in the details, and each seems to be about as difficult to learn as the other. Although this article does make somewhat of a case for specifically Racket, it seems to be a rather weak one - tools are nice and some language details are nice. But the same general arguments can be made for other Lisps, most notably Clojure. It seems to me that Clojure is a lot more practical: it has many good libraries in both Clojure and Java, it has some great tools, there's a lot of momentum, and it can be deployed everywhere (including the browser).

So, being an ignorant outsider, is there any reason the Lisp I should learn isn't Clojure?

> all Lisps seem to be more or less interchangeable when it comes to the language

Definitely not true.

It may be true to an extent for implementations of one particular lisp, like Common Lisp or Scheme, but even then there are very real, and significant differences. Simple and regular syntax makes it actually much easier to build many different semantics, and that's what lisps are about.

Racket has fewer libraries than Clojure and no access to the JVM ecosystem. On the other hand, its FFI to C is very nice and easy to use. Racket's macro system - syntax-parse (with syntax-rules and syntax-case and... defmacro for simpler cases) - is the most powerful there is right now. Combined with access to the reader and the ability to define reader macros, this makes Racket much easier to extend and transform than Clojure (by design). Which takes us to the biggest advantage of Racket: it's actually a family of languages, both s-exp based and with traditional syntax, built for specific purposes, which you can mix and match easily. Typed Racket and the lazy variant, and FrTime, and Datalog - and more - are examples of this. Then you have the module system, which is a game changer for unit testing; an object system more powerful than most (besides CLOS, but there's always Swindle); the most advanced contract system among all PLs; and of course a quite fast JIT compiler, and much more.

Clojure is simpler than Racket, but provides a set of highly opinionated defaults which people are comfortable with. I think Clojure and Racket are like Sublime Text and Emacs - Emacs is strictly more powerful, but it requires much more work to use this power well, while Sublime works well from the get go. But in the end both are infinitely better than Notepad and sooner or later you will come to know both, which is what I suggest you should do.

Thanks for the clarification. Is it worth considering others besides Racket and Clojure as a first Lisp? Are there Lisp equivalents of Vim and Atom as well?

One path you could take is to start using a lisp for the platform you're already familiar with. There is Hy for Python, LispyScript (and others) for JavaScript, LFE for Erlang and more. These are "lisps" to varying degrees - they all are written with sexps, but they typically preserve the semantics of underlying language, which makes it really easy to pick up and get over the initial hurdle of "ZOMG parens!!!".

I had a good experience with Chicken Scheme, which is a much simpler system than Racket but also quite rich in features and add-on packages, and it also produces fast native executables. I suppose you could make it your first Scheme.

Of course, Emacs users should just hack in Elisp for a couple of months, then switch to Common Lisp. I wouldn't recommend going straight for CL, unless you're going to follow some really good book (like pg's "On Lisp" or maybe "Land of Lisp"), because it's very easy to drown in CL capabilities.

You should decide what you want to build - don't learn a lisp just for the sake of learning, build something with it! - and then choose the best lisp for this particular project. I'm sure lispers everywhere will be happy to help you choose (just before the thread evolves into massive flame-war, again... ;)).

Yes, Common Lisp. Gradually typed, fast, compiled, standardized, stable, and with multiple mature implementations.

Emacs+SLIME, Common Lisp (I like SBCL).

Racket has probably the best macro system of all existing lisps available to date. Plus it has the concept of extensible languages via #lang. If you care about lisp-style macros and meta-programming, that's probably the main reason I would recommend Racket.

Clojure macros in comparison are very ascetic. You can get stuff done in them but it's not a strong feature of the language in my opinion.
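To make that comparison concrete, here's the kind of thing Racket's macro layer makes trivial - a `while` loop the core language doesn't ship with, added in a few lines. (My own illustrative sketch, not from the comment.)

```racket
#lang racket

;; A classic macro example: introduce a `while` loop as new syntax.
;; The body expands into a named let that re-tests the condition.
(define-syntax-rule (while test body ...)
  (let loop ()
    (when test
      body ...
      (loop))))

(define i 0)
(while (< i 3)
  (printf "i = ~a\n" i)
  (set! i (add1 i)))
```

Because the expansion happens at compile time, `while` behaves like built-in syntax rather than a function whose arguments are evaluated eagerly.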

Racket has strong multi-platform support of its own. I actually chose Racket over options like Clojure for building a game, because I wanted to be able to easily and quickly package one codebase for deployment on Windows (running in a window), OS X (running in a window), and Linux (running in a terminal). From the standard install, you can quickly package code into a stand-alone binary. With Clojure, some users would need to install a JVM. Clojure's great though.


Structure and Interpretation of Computer Programs by Abelson and Sussman, for the uninitiated.

It's one of the best introductions to basic and advanced (and then even more advanced) programming there is. This book, along with "The Little Schemer", "The Seasoned Schemer", and "How to Design Programs", uses Scheme, and Racket makes it particularly easy to follow them by providing special environments (#langs) crafted for them.

Online interactive version of SICP http://xuanji.appspot.com/isicp/ for great win.

Racket vs Scheme vs Common Lisp is like C# vs Java vs C++ vs ...

RacketCon is being held in Saint Louis, Missouri, USA the day following the Strange Loop 2014 conference (also in Saint Louis):


I hope to see some of you there!

Check it out. For five years I have been developing a tool called TXR. It's a "Unixy" data munging language that is ideally suited for programmers who know something about Lisp and would like to move away from reaching for the traditional stand-bys like awk, sed, perl, ...

You do not have to know any Lisp to do basic things in TXR, like extracting data (in fairly complicated ways) and reformatting it, but the power is there to tap into.

In TXR's embedded Lisp dialect ("TXR Lisp"), you can express yourself in ways that can resemble Common Lisp, Racket or Clojure.

You can see a glimpse of this in this Rosetta Code task, which is solved in three ways that are almost expression-for-expression translations of the CL, Racket and Clojure solutions:


Or here, with syntax coloring:


If you closely compare the original solutions, you will see that certain things are done more glibly in TXR Lisp.

> If Lisp lan­guages are so great, then it should be pos­si­ble to sum­ma­rize their ben­e­fits in con­cise, prac­ti­cal terms.

His list is concise but man did he take a while to get to it!

Seriously though, the introduction was super relevant, as I have wondered the exact same thing about Lisp myself. What features make it so praiseworthy? Maybe X-expressions aren't a core feature for everyone to appreciate, but the fact that everything is an S-expression is an understated virtue. People complain about its syntax, but alternate versions (so many reincarnations of parentheses-less Lisps) have never caught on.
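For the uninitiated, an X-expression is just HTML/XML written as a plain Racket list, so ordinary list functions build and transform documents. A minimal sketch (my own example, using Racket's standard xml library):

```racket
#lang racket
(require xml)  ; provides xexpr->string

;; An X-expression: markup as nested lists, no separate template language.
(define page
  `(html (body (h1 "Why Racket?")
               (p "Everything is an s-expression."))))

(display (xexpr->string page))
```

Since the page is ordinary data, quasiquote, map, and friends serve as the templating system for free.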

The thing is, Lisp is no longer unique in its feature set, and languages with more standard forms of syntax have incorporated some of its features. But it is uncommon to find all of these listed features in one language. In the domain of data analysis where I do most of my work, it still makes me sad that XLISP-STAT has been supplanted by other languages which leave the user wanting.

By feature set you mean sequences, map/filter/reduce and such?

I felt there was more to Lisp than that. It was the root of the ML/function-based family, which deepened the recursive typed logic McCarthy talked about in his early papers.

First-class functions, and functions as the unit of modeling, give you composability and first-class domain embedding.

Agree with some of the earlier comments: the diamond-shaped thingies inserted are really a nuisance and break the reading flow.

Indeed. Whenever you break a convention like underlined and colored text for links, you generally need a good reason.

Maybe only because it's unfamiliar. After all, other punctuation doesn't break your flow. And trying to figure out whether something is a link or not also breaks flow.

The real question is why someone would change something most people are already used to. I don't see any compelling reason to change links in this way, do you?

Does anyone else find it difficult to highlight things on this page? Specifically, 'kvetchery' which is found in the third paragraph.

I believe the author is the one responsible for the facelift of Racket's documentation. He may make light of his own lack of formal programming education, but I am thankful for his design chops.

I am learning Perl's FP features and really liking them, and teaching myself Common Lisp. I enjoyed the article quite a lot, actually.

I find it very hard to define functional programming for many people, but this is how I have come to explain it:

Functional programming means thinking in terms of mathematical functions, in the f(x) sense. Once you get that basic premise - that for any given input there is a single correct output - it transforms the whole way you think about and design your software.
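The contrast is easy to show in a few lines of Racket (my own illustrative sketch - `area` and `add-to-total!` are made-up names):

```racket
#lang racket

;; Pure: the output depends only on the input, like f(x) in math.
(define (area r) (* pi r r))

;; Impure: the result depends on hidden state, not just the argument,
;; so the same call can return different answers over time.
(define total 0)
(define (add-to-total! x)
  (set! total (+ total x))
  total)

(area 2)           ; the same answer every time you ask
(add-to-total! 5)  ; a different answer on every call
```

The pure version can be tested, cached, and reasoned about in isolation; the impure one only makes sense relative to the history of calls before it.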

The better I get with lisp, the more everything else changes. I may have to try Racket.

What about spending a few minutes exploring P6 and providing feedback about it to Larry Wall et al on #perl6?

One easy way follows (for you or anyone else reading this who is curious about P6). Preferably between about noon and midnight GMT (so you're more likely to catch Larry Wall, nick TimToady), visit the freenode IRC channel #perl6 (eg go to https://kiwiirc.com/client/irc.freenode.net/perl6 and enter a nick) and say 'hi'. There are on-channel evalbots and folk who will be happy to show you a well-designed and continuously improving Perly blend of FP, OO, etc.

Yesterday's log is at http://irclog.perlgeek.de/perl6/yesterday. Hope to see you in the log. :)

I think your explanation is excellent for getting the point across. All the other nice things we get in FP derive from the "mathematical functions" fact you stated. It's also good in the sense that someone can then ask you "how is it that using mathematical functions transforms the way you think?" - and now you have their attention and interest. :)

I tried Pollen a bit and it's kind of fantastic; the only thing is that getting a new project to work in a language I don't know very well is difficult for me. That's why I started a similar project in Python - watch https://warehouse.python.org/project/azoufzouf/ if you're interested.

I just pushed the code. You can have a look at it at http://amirouche.github.io/azoufzouf/ - there is all you need to know, plus ⵣcode{pip install azoufzouf}.

With that markup, it's easy to plug in any function you want. That's right. It's not nested, and is inspired by the Unix philosophy, I dare say. I tried Sphinx several times, docutils alone, and other markups; they don't deliver as much as this one (so far).

