Hacker News
A philosophical difference between Haskell and Lisp (2015) (chrisdone.com)
238 points by behnamoh 11 months ago | 181 comments

There is a strange... kind of poetry with Haskell. It is like math on wheels, math applied to procedures, math with... time.

Its appeal to me is like the appeal of math to me, not like real analysis math but abstract algebra math. The beauty, the purity of mathematics of younger days that once became lost after encountering the sad complexities of the world. Understanding every little aspect and being able to prove every part is now a luxury, and interfacing without real understanding is the more practical approach in the turbulent waters of poorly connected technological and social systems.

But still there is hope and there are dreams. We like to drench ourselves in dream qualia sometimes, and Haskell and pure math are that medium. The abstractions of it, the consistency of it, the purity of it... When Haskell is called a pure language, it almost goes beyond the static definition of functions being pure, and describes the general feeling that occurs when writing Haskell. You feel pure. You feel like you are taking these small parts and creating greater parts in an elegant buildup of abstractions, traversing one level higher and one level lower at your whim.

Lisp... maybe it’s the parentheses, maybe it’s something else... it never really caught on with me like Haskell did. Haskell feels pure and dream-like and perhaps unsuited to the world where (if you really get down to it) abstractions and types are just useful ‘human’ inventions and unfit for every usage. The world is for getting down and dirty, and mathematics, or at least the pure side of it, really isn’t. The representative mathematics of Haskell is Category Theory, and it is just as far from the level of “real” as it can be. More abstract than abstract algebra, if you will.

Abstraction itself is an intellectual operation that is also rooted in emotional detachment. Perhaps Haskell represents that kind of ideal in a modern world where practicality pays before purity.

> There is a strange... kind of poetry with Haskell. It is like math on wheels [...] pure and dream-like

For me too there is a feeling of poetry... but for me it's more the nails-on-blackboard grating of awful poetry, and the ice-pick-to-ears screeching of abrading subway wheels. :)

Lisps merely inspire me towards elegant craftsmanship. But Haskell tempts me to dream of more. It's my fault, but ouch.

Dreams of math made manifest. Of materializing a lattice of theories, each with types and operators and laws, simply extensible and composable - a push-out lattice. Dreams of programming that's more math than plumbing. And maybe someday, some future Haskell Next will permit that flight, that dance. But that's not Haskell.

Someone I know, will pull up the deep math of a problem domain, model it in Coq, and mentally compile it to Haskell, extending GHC internals as needed to better serve as compiler target. For a wetware compiler that gets only very limited support from current tooling. I can't do that. Not even close. I need a language and tooling that's on the same page as me, to serve as an extended self. And for this, that's not Haskell.

Many years back, someone in a conversation suggested that a core skill of a professional C++ programmer was cataloging the things which seem like good ideas, and might even serve well in some other OO language, but which tended to go badly in that era's C++. Along with many baroque workarounds to salvage others.

Fewer years back, I'd a conversation about using Haskell in education. We talked of alternate preludes to remove historical clutter, and extensions to remove historical limitations. And turned up a fun divergence - he thought of learning Haskell as fairly easy, and I thought of it as ghastly hard. For his default context was programming in the small - vocabulary, functional mindset and such. And mine was... you start at the other end, with the dream, the no-extraneous-complexity pure essence of what is needed... and now you have to learn the zoo of baroque workarounds, and crippling limitations, by which Haskell fails the dream.

God, reading these comments generates inspiration in me to create a pristine, beautiful language for you all to lust over. Lolol

That might ironically go better starting from a Lisp-like dynamic language. "Just add types". Because the static typing side is hard, advancing one paper and thesis at a time. The poorly-funded multi-decade creeping installment plan of progress.

Even with category theory, the overwhelming emphasis is on using it for proof rather than for its expressiveness. A fun example... from a very collegial course in pre-Covid January, sniff... Categories work over PreSets - no predicate of member equality required. But say I claim to be handing you a PreSet. You can't verify that claim. Your ability to prove anything is very limited. Sad for mathematicians. But fine as API. But that's not where attention is.

Paid haskeller here.

That feeling you describe quickly goes away when you use the language every day. Once that happens, you can start weighing it as a tool.

And overall, my assessment is that purity and practicality are not competing aspects. I would say that purity contributes to the practicality of Haskell.

I followed a (pure) mathematics education, and I totally dig lisp. On the other hand, I abhor haskell and hate almost everything about it!

This comment would be more informative if you briefly explained why you abhor it and hate almost everything about it.

Yep, sorry.

This is a matter of feeling, completely irrational, but just my experience.

When I program in lisp I get the same feeling when I'm solving an ODE by hand, or a Diophantine equation, or designing a numerical method to approximate the solution of a PDE, or finding the Euler-Lagrange equations of a physical problem. The thing is real and it gets shit done really fast; it is just exhilarating. Moreover, lisp macros are so dirty and fun that using them feels like kinky sex.

Haskell is like category theory. Sure, it is pure and general, and you can create set theory and the rest of math from it. But it is certainly the most boring thing that I can think about.

But hey, whatever floats your boat.

Do you prefer analysis over algebra? How do you eat your corn?

(I prefer algebra and Haskell. But I like Lisp well enough.)

I like geometry first, then algebra and analysis. I work mostly by teaching analysis and doing some research.

Lol, I separate it from the cob using a knife and then I eat it with a fork. My 8 year old son taught me that technique because he says there is less risk of fibers stuck in your teeth, which he hates.

Now I want to really learn math. Is the aesthetic experience comparable to programming? Probably greater. I need something where I can invent if I want to get my fix. As a layperson, math never scratched that itch entirely (aside from proofs which as we know are basically programming).

Sounds more like applied math...

Nothing wrong with applied math. It's still math.

From the grandparent post:

> I followed a (pure) mathematics education...

Oh, OK, that's what you meant. Makes more sense.

I'm not so quick in judging people by how applied they are.

We mathematicians always try to come up with new pure fields, but then some other people always come and find some applications. Just see how cryptography and computer science sullied our beloved number theory!

(I mostly focused on the computer-science-y parts of math in my studies. Things like linear optimization and combinatorics. Not sure whether to count that as applied or pure. I guess I might get a purity pass, because we only ever proved our algorithms correct but seldom implemented them.)

Is it the type system?

Me too and I like Haskell the most just because I feel like it's closer to math than clojure.

If I want to solve a problem I can think of it as a series of maps f: X -> Y, and once I solve it on paper, "translating" into Haskell just feels natural.
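As an illustration of that "translating from paper" feel, here is a minimal sketch; the functions `f` and `g` are made up purely for the example:

```haskell
-- On paper: f : X -> Y, g : Y -> Z, and the solution is g ∘ f.
-- Hypothetical example functions, chosen only to show the shape.
f :: Int -> Double
f = fromIntegral

g :: Double -> String
g = show

-- The paper solution g ∘ f translates directly to (.)
solution :: Int -> String
solution = g . f

main :: IO ()
main = putStrLn (solution 42)  -- prints "42.0"
```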

It's also super easy to do that in Clojure; maybe I'm just biased because I found and learned Haskell first.

Nowadays I use Clojure for everything and it's amazing. I never enjoyed programming so much until I discovered the functional paradigm. It just feels right.

I also followed a pure maths education. I like lisp (in the CL/emacs lisp family more than scheme which tends to have smaller composable functions) and Haskell. I find some parts of Haskell culture/styles to be quite silly however, and fewer parts of the lisp community to be silly. But that may just be because of Haskell’s greater popularity.

I find that the complex syntax of Haskell is something I'll never be able to remember. There's so much implied in the syntax. Lisp is so much more readable and thinkable-inable because there's almost no syntax. No other language is as easy to think in. But I've never studied math beyond pre-calc.

is haskell more popular now? what about clojure?

Perhaps greater memetic popularity?

Very much agree. When solving a problem with Haskell one first has to step back and think in terms of what algebra the underlying domain objects satisfy, then encode that in a combination of Haskell's type system + pure functions. It takes a lot of thought, but if done right, it feels like cracking an elegant proof.
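As a toy sketch of that workflow (the `Duration` type here is invented for the example): if the domain objects combine associatively with an identity element, that algebra is a monoid, and once it's encoded as a `Monoid` instance the standard combinators do the rest.

```haskell
-- Hypothetical domain object: a time duration in seconds
newtype Duration = Duration Int deriving (Show, Eq)

-- Durations combine associatively, with zero as identity: a monoid
instance Semigroup Duration where
  Duration a <> Duration b = Duration (a + b)

instance Monoid Duration where
  mempty = Duration 0

-- Folding a collection now comes for free from the algebra
total :: [Duration] -> Duration
total = mconcat

main :: IO ()
main = print (total [Duration 5, Duration 10, Duration 2])  -- Duration 17
```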

Best and worst thing about programming to me: it can be applied. Pro: I get to make good money with my “art” of choice. Con: I have to create hideous monstrosities.

I'd hazard to say that the abstract algebra is more foundational, more of a bedrock on which the sophisticated cathedrals like complex analysis can be built.

Math with time (and space/memory)... You just found the difference between computer science and math. And the reason proof-of-work in blockchains is possible.

Regarding time... It's sometimes not easy to say what happens when in a Haskell program. Yes, monads give you a sequence, but it's usually not the one sequence of an imperative language.

Indeed. And while monads provide a certain kind of sequence, it's important to remember that what is being sequenced is just some data dependencies—later effects can depend on the results of earlier effects. The evaluation order can be completely different depending on the type of monad and how the data is actually used. For an extreme example, take the forward & backward state (or "Tardis") monad[1], or for that matter any monad that implements MonadFix (including IO). The fact that the code is written in some order using a monad does not imply that it gets evaluated in that same order.

[1] https://hackage.haskell.org/package/tardis-

"Dream-like" is a good adjective for Haskell programming. Sometimes when writing Haskell I just zone out and let my subconscious do whatever it wants, and the type system provides enough structure that this usually gets you working code. It's a very fun creative exercise.

This is interesting to read having worked with Clojure, but never Haskell or CL. I expected the Haskell examples to look alien and the CL to look familiar, but the idiomatic Clojure solutions to the examples are almost identical to the Haskell solutions. E.g.

    take 5 . filter (not . p) . drop 3

    (->> s (drop 3) (filter (complement p)) (take 5)) ; for some sequence s
I think it is also true of Clojure that it strives to have many small functions with a high degree of composability.

I use threading (->>/->) macros in Elisp too. Might not be "idiomatic" but to me malleability of LISPs has always been the biggest selling point. Although I don't see threading macros a lot, functional libraries are pretty popular in newer Elisp packages.

Now there's a different discussion to be had about laziness and immutability but for a lot of stuff it doesn't really come into play.

We have laziness in Clojure but I don't think we have stream fusion. Yet the functional approach works just fine for most things and is preferred.

Even in JS which doesn't even have built-in laziness most new stuff uses a functional approach (underscore,lodash etc) wherever possible too.

CL is just unique in that it has a very long history and a huge standard library. They have many of these monolithic functions that other languages simply don't have built-in.

(filter (complement p)) is just (remove p), right?

Is there something similar to (->>) in Haskell?

Edit: corrected “—>”.

Assuming that (-->) is either a typo of Clojure's ->> or ->, or some CL macro that operates the same (I don't know any CL, admittedly), in Haskell there is the somewhat common & (as in, available in the standard library and the widely-used lens library, but not part of the prelude and a lot of folks seem to eschew it in favor of composing in "the other direction" with . and $), which is defined as `flip ($)` (i.e. reverse function application) and visually ends up working the same as ->>:

    someList
    & map someFunction
    & filter somePredicate
    & sum
Other ML-ish languages typically provide this operator but under a different name, |> being a common one. One important note is that the mechanics here are a bit different than threading macros in lisp, since you're explicitly building the AST you want and using operator precedence (and implicit/automatic currying) to get the visual effect, rather than calling a rewrite rule. This means you can't have a straightforward reproduction of Clojure's -> macro, although in practice this never really matters, and when it does you can still fake it with flip (or, more readably, parens and a lambda).
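A complete, compilable version of that `&` pipeline might look like this (the list and the functions are placeholders for the example):

```haskell
import Data.Function ((&))

-- the value "flows" top to bottom, like Clojure's ->> threading
result :: Int
result =
  [1 .. 10]
    & map (* 2)     -- [2,4,6,...,20]
    & filter (> 5)  -- [6,8,...,20]
    & sum

main :: IO ()
main = print result  -- 104
```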

If you really want to, Haskell also has a few macro systems. They are just not used nearly as much as in Lisp.

This comes for free from Haskell's automatic partial application (that arises from Haskell's pervasive use of function currying). So several different Haskell variants of Clojure's `compose` correspond to different versions of Clojure's threading macros.

Clojure's threading macros arise because it's a little bit more awkward in Clojure to partially apply functions (either anonymous functions with # or use `partial`) and it reads more fluently to just use threading macros there.
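The "comes for free" point is easy to see directly: in Haskell, leaving an argument off already is partial application, with no `partial` or `#(...)` wrapper needed (the names below are made up for the example):

```haskell
-- every multi-argument function is curried, so partial application
-- is just leaving arguments off
add3 :: Int -> Int
add3 = (+ 3)        -- partially applied (+)

firstFive :: [a] -> [a]
firstFive = take 5  -- partially applied take

main :: IO ()
main = print (map add3 (firstFive [10, 20 ..]))  -- [13,23,33,43,53]
```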

However, Clojure can do fancier things with variants of the common threading macros (https://github.com/rplevy/swiss-arrows)

What's the syntax in Haskell for thread-first (-> in Clojure) which inserts the result of the previous operation as the first argument, versus thread-last (->>) which inserts it at the end, so is more amenable to composed curried functions?

Ah I was wrong! That's what I get for not coding in Haskell in a year and not checking my answer with GHC.

Function application always has higher precedence than infix operators so I can't emulate thread-first with functions in Haskell. If that wasn't the case, this would work (this was what I was thinking of)

    (|*>) :: b -> (a -> b -> c) -> (a -> c)
    (|*>) x f = (\y -> f y x)

    xs = 5
        |*> take [1, 2, 3]
Otherwise, I can come close with this.

    import Data.Function ((&))

    infix 9 *-
    (*-) :: (a -> b -> c) -> b -> (a -> c)
    (*-) f x = (\y -> f y x)

    xs :: [Int]
    xs = 5
        & take *- [1, 2, 3]
        & filter (/= 1)
        & head
        & take *- [1, 2, 3]

Using right-associativity of function types, and by sugaring the lambda into another function argument, we can rewrite this as

  (*-) :: (a -> b -> c) -> b -> a -> c
  (*-) f x y = f y x
This reveals that (*-) is simply flip, which is in the Prelude (re-exported from Data.Function). Alternatively, Hoogling the original type signature will reveal the same thing.

So the example doesn't require any new functions:

  import Data.Function ((&))

  xs :: [Int]
  xs = 5
      & flip take [1, 2, 3]
      & filter (/= 1)
      & head
      & flip take [1, 2, 3]
In general, it's common in Haskell to use flip, curry/uncurry, and other higher-level functions to manipulate functions so they fit into the needed context.

If you don't like flip, you could also do

  xs :: [Int]
  xs = 5
      & (`take` [1, 2, 3])
      & filter (/= 1)
      & head
      & (`take` [1, 2, 3])
But I'm not sure that's a better style than flip.

I really like `on` from Data.Function, too.

For maximum readability I'd be tempted to write the lambdas explicitly.

  xs = 5
      & (\n -> take n [1, 2, 3])
      & filter (/= 1)
      & head
      & (\n -> take n [1, 2, 3])

That might be a wise choice.

It depends on what you are used to and your tolerance thresholds.

In a code base I worked on professionally, I came across the following idiom

    f a b `flip` d
which at first used to confuse me to no end. Later I figured out that you are supposed to read it as:

    f a b <place-holder> d
and thus

    \c -> f a b c d
In the same vein that Scala uses an underscore in somewhat implicit anonymous function syntax.

I still wasn't really happy with that usage, but at least I see why someone thought it would make sense.

(Mostly I wasn't so happy because it only works for the penultimate argument. If the syntax had worked out so that

    f a `flip` c d
would mean

    \b -> f a b c d
I would have been more amenable to the idiom. But you need the ugly

   f a `flip` c `flip` d
for that case. And that's clearly worse than the lambda syntax.)

Oh good lord, that's horrible!

I think the Prelude answer would be to use `flip`[1], along with composition (`.`) or application (`$`) as usual.

    flip :: (a -> b -> c) -> b -> a -> c 

[1]: https://hackage.haskell.org/package/base-

You can create it yourself:

    Prelude> s ->> f = f s
    Prelude> [1..100] ->> take 10 ->> filter (not . odd) ->> drop 3

I think `&` is this function. In Data.Function.

Or you can use >>> from Control.Arrow, if you prefer to go point-free.

I think there are some pipe-like operators in the popular Lens library (and presumably others as well). However, writing Haskell, you quickly get used to everything being backwards.

I find the Lens operators in Haskell just the most hilarious API in all of computing, even knowing that there is some sense to it.

I very much enjoyed a Kmett talk where he described needing to parameterize some lens thing with an index, a monad it would operate under, the source focus, the target focus, and three different containing structures, with some pieces repeated... and then realizing the type arguments said `i m a s t a b u` and decided he had probably gone too far.

What's interesting is that Ed Kmett isn't even a dyed in the wool Haskeller. He has more than a foot in the C++ world.

He's quite brilliant, but I still haven't really gotten used to the full power of lenses, yet. I only used them in simple ways.

While I have found the majority of my interactions with Haskell (and ML family languages) deeply satisfying, I have never really used Lens in anger, having been put off each time I've tried to get into it. Although, I remain slightly sympathetic since - as you say - there is some theory underlying it all.

I may be mistaken, but I think I have seen a handful of "Lens-lite" type libraries for use in production since it's such a large library, which seems.. uh.. suboptimal.

Not really. In the example, function composition, `.`, joins the functions together.

You can smash a list of functions together but they'd all have to take and return the same type, and the list would introduce commas and square brackets, so it wouldn't look quite the same.

You might be able to do some type trickery with heterogeneous lists? But I don't think it would be worth it.

`.` is function composition in Haskell (used in the example).

Interestingly function composition is also the mapping operator (fmap) for the corresponding functor.
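Concretely: for the function functor `(->) r`, `fmap` is exactly composition, which a couple of lines in GHC will confirm:

```haskell
-- For the functor ((->) r), fmap f g = f . g
main :: IO ()
main = do
  print (fmap (+ 1) (* 2) $ 10)  -- same as ((+1) . (*2)) 10
  print (((+ 1) . (* 2)) 10)     -- both print 21
```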

When I was new to Haskell (and also not an experienced programmer), the following example on “functional style” had a huge positive impact on me -- where the same program is re-written in ten different ways, each time making it more modular & composable: http://yannesposito.com/Scratch/en/blog/Haskell-the-Hard-Way... (section 3.1... Feel free to ignore everything before/after that)

It feels so much easier to both understand and modify the evolved version of the code.

Thanks a lot, this is a link to hand to aspiring FP programmers for the content is well presented.

This one made me laugh out loud: "This is sometimes called parametric polymorphism. It’s also called having your cake and eating it too."

The whole blog seems fun to read and informative!

I love the "modifying version 1 is left as an exercise to the viewer."

And I'll definitely read more of that blog.

This seems to imply that extensive use of composition in Haskell is a notable stylistic decision rather than an entailed effect of the fundamental design choices (first-class functions, lazy evaluation, type classes) that explicitly encourage such composition and a terse, mathematical syntax that makes the composition operation obvious.

Heavy use of composition is idiomatic in lisps, too. It’s just that the syntax doesn’t rub your face in it: it’s pretty common to see a form in lisp that ends in a dozen close-parentheses as we see chains of forms being passed as arguments to other forms. It would be weird if composition wasn’t idiomatic in a language with first-class functions.

The loop macro isn’t good evidence: it’s notable precisely because of its difference from typical lisp usage. I’ve heard lisp programmers claim to refuse to use loop because it clashes with the rest of the language. I’m less interested in purity, and it is so convenient sometimes... if only as a clear illustration of just how much power and potential for abuse the CL macro system provides. And let’s just leave CLOS and metaobject protocols under the tarp for now :-)

> It’s pretty common to see a form in lisp that ends in a dozen close-parentheses as we see chains of forms being passed as arguments to other forms.

That's just a chain of function applications.

Function composition puts the emphasis on the fact that functions are values in their own right and can be manipulated (eg with a composition operator) as objects without regard to any values they might act on.
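A small sketch of the distinction: the composed pipeline below is itself a value, built and named without mentioning any list it might act on.

```haskell
-- the pipeline is a first-class value: no input list in sight
pipeline :: [Int] -> Int
pipeline = sum . map (* 2) . filter even

main :: IO ()
main = print (pipeline [1 .. 10])  -- 2+4+6+8+10 doubled and summed: 60
```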

If you want to see one Haskell equivalent of loop, https://hackage.haskell.org/package/monad-loops-0.4.3/docs/C... has got you covered.

I’m not convinced that chained application and composition are meaningfully distinct in anything but a syntactic sense if the language in question offers first-class functions and deterministic evaluation (eg no side effects). In mathematics, composition ($f \circ g$) is invariably described in terms of chaining f(g())—in fact, the composition operator is usually defined as an alias for chaining. Yes, there are subtleties I’m glossing over, but I don’t think they support the thesis that function chaining and composition are significantly different at this level (i.e. the level of syntax and idiom rather than the level of Category Theory).

Mostly agreed.

John Backus seemed to think it was a big deal. See eg 'Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs ' (http://www.ict.nsc.ru/xmlui/bitstream/handle/ICT/1256/1977-b...)

On the one hand, I do agree that in a language like Haskell it's almost entirely a syntax level distinction.

On the other hand, syntax level conveniences and inconveniences can have a big impact on how people use a language. (Eg in Haskell if-then-else is more hassle to type out than pattern matching. A slight nudge by the language design to make users prefer the latter.)
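The nudge is easy to see side by side (a toy example, both versions equivalent):

```haskell
-- guards read more fluently...
classify :: Int -> String
classify n
  | n < 0     = "negative"
  | n == 0    = "zero"
  | otherwise = "positive"

-- ...than the nested if-then-else spelling of the same logic
classify' :: Int -> String
classify' n =
  if n < 0 then "negative" else if n == 0 then "zero" else "positive"

main :: IO ()
main = mapM_ (putStrLn . classify) [-1, 0, 1]
```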

Once you have acquired the ability to see functions as objects in their own right, many more techniques become available. Think of eg parser combinators.

Technically you don't need the syntactic convenience to use parser combinators. But I am not sure they are worth the hassle without those conveniences. Eg Python has enough functional machinery to support parser combinators, but its options for defining functions (def and lambda) are just so cumbersome.
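To make the "functions as objects" point concrete, here is a minimal parser-combinator sketch (hypothetical names, not any real library): parsers are plain values, and `andThen` manipulates them without ever mentioning input.

```haskell
-- a parser is just a value: a function from input to a result + leftover
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

-- match one specific character
char :: Char -> Parser Char
char c = Parser $ \s -> case s of
  (x : xs) | x == c -> Just (c, xs)
  _                 -> Nothing

-- combine two parsers into a new parser value
andThen :: Parser a -> Parser b -> Parser (a, b)
andThen p q = Parser $ \s -> do
  (a, s')  <- runParser p s
  (b, s'') <- runParser q s'
  Just ((a, b), s'')

main :: IO ()
main = print (runParser (char 'a' `andThen` char 'b') "abc")
-- Just (('a','b'),"c")
```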

And thanks for the link to the loop monad. I didn’t know it or forgot. I don’t know whether to be delighted or appalled.

Glad to be of service.

(Slight pedantry: I wouldn't call it the 'loop monad' in the same way as 'list monad' or 'IO monad'. The monad loop library provides looping combinators you can use with many monads and some you can use with all monads.)

The LOOP macro is actually a very different approach: it's an embedded domain specific language for iteration tasks. The language is thought to be more natural (-> 'conversational') than formal -> an actual influence from Interlisp, where this macro originally came from.

If we talk about the approach of using larger building blocks with named parameters, that's also typical in UNIX, since most UNIX commands have zillions of named arguments, even more so than typical Common Lisp functions. In interactive command languages it is typically easier to fill out common options to a command than to write programs.

For example 'ls' does not expose a bunch of functions to combine, but is a powerful command with lots of named options:


Named parameters/arguments are not generally common in Lisp. Emacs Lisp did not have them and wanted to avoid them explicitly.

Common Lisp doesn’t just have LOOP, it also has Series [1]. I think the actual philosophical difference may be that Haskell does this with compiler support for stream function composition, whereas Common Lisp does it with a set of language-level macros any user could write (if they were as smart as Richard Waters).

[1] https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node347.html

Stream fusion in Haskell is done with rewrite rules. On the one hand, it's totally true that you're no longer really writing Haskell, and the facility for rewrite rules itself needed (at some point) to be added to the compiler. But at this point I can make my own data structures fuse (potentially in new ways) without further changes to the compiler.
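As a sketch of what such a rule looks like (using a made-up `mapMy` so GHC's own built-in rules don't interfere), this is ordinary user code, not a compiler change:

```haskell
{-# NOINLINE mapMy #-}
mapMy :: (a -> b) -> [a] -> [b]
mapMy f = foldr (\x acc -> f x : acc) []

-- tell GHC: two traversals may be rewritten into one (fires under -O)
{-# RULES
"mapMy/mapMy" forall f g xs. mapMy f (mapMy g xs) = mapMy (f . g) xs
  #-}

main :: IO ()
main = print (mapMy (+ 1) (mapMy (* 2) [1, 2, 3]))  -- [3,5,7]
```

Whether or not the rule fires, the result is the same; the rule only changes how many passes over the list are made.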


To get a feel for how these compare, the Common Lisp Cookbook gives some examples for how to do different common operations using the loop, series, and iterate packages: https://lispcookbook.github.io/cl-cookbook/iteration.html

Strictly speaking, Common Lisp does not have series because they were not included in ANSI Common Lisp. Therefore, even though you could use the macros provided by a separate series package, you’d need to make sure that all your code used that package (rather than the CL definitions of let and lambda and so on) and properly declared series functions. This ends up making code using series not super composable, which isn’t great.

I also think it’s a bit unfair to say that Haskell’s stream fusion requires special compiler support. The compiler doesn’t recognise list functions specifically or have special cases for list-looking-types (except for syntax). Instead there is a general mechanism to say “at stage number x, replace expressions that look like y with z”, where there are something like 10 stages and expressions y and z are type checked so these transformations are only applied in “valid” cases. I think saying that this is special compiler support for stream fusion is like saying that macros are special compiler support for series in CL.

A final thing to note is that because of lazy evaluation, stream fusion in Haskell doesn’t change the semantics of the Haskell list functions from how they would behave without it. In Common Lisp, series must be explicitly separated from lists because they are semantically different (in the evaluation order of the functions that act on them).
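Laziness is why the list functions already behave stream-like, with or without fusion (a minimal example):

```haskell
-- works on an infinite list: only what is demanded is ever computed
firstEvens :: [Int]
firstEvens = take 3 (filter even [1 ..])

main :: IO ()
main = print firstEvens  -- [2,4,6]
```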

GHC does have special support for fusion and rewriting. See e.g. [1]. The laziness comes with the language, but turning it efficient requires a series of these rewrite rules expressed as GHC-specific metadata.

[1] https://markkarpov.com/tutorial/ghc-optimization-and-fusion....

I was referring to the RULES pragma. My argument was that it is a general mechanism and not a built-in understanding of list functions in the compiler. I suppose I ought not conflate GHC with Haskell, but approximately everyone uses GHC.

GHC is the most complicated compiler of any language (or very near to it), except perhaps Mathematica which is similar:

It's an optimizer for a graph-reduction engine.

It doesn't mean much to call out "special support" for any one bit of style of programming, since the language itself has nearly no concept of performance -- you just write functional code and the compiler will find the fastest runtime computation of the evaluation that it can.

> It doesn't mean much to call out "special support" for any one bit of style of programming, since the language itself has nearly no concept of performance -- you just write functional code and the compiler will find the fastest runtime computation of the evaluation that it can.

While GHC is glorious, it's still very limited in what it can do, performance wise.

You are right that languages themselves seldom have a direct concept of performance. But language features and restrictions have a big impact on achievable performance. Eg Haskell's purity-by-default means that the compiler has quite a bit of freedom when choosing how to translate something like eg the 'map' or 'filter' functions.

The equivalent in C++ gives the compiler less flexibility, because the helper function handed over at runtime might have side-effects.

Similarly, Haskell functions are still allowed one important side effect: non-termination. In a more restricted language like Agda that's not allowed. Thus giving the compiler more degrees of freedom to work with.

Or in a different direction: the query optimizers for SQL can do very impressive work, because SQL is so limited.

My impression was some of those expressions were added to the compiler specifically to make stream fusion work well enough to use. Even if so, it’s a good point that laziness makes streams more “natural” in Haskell so that was just a performance improvement, not a semantic change. Certainly laziness is a big “philosophical difference” between the two!

Yes. And laziness works because Haskell is pure-by-default.

Yes. Though if we were starting from scratch today, perhaps we would see a total language, ie one where all functions terminate.

That would allow the compiler much more freedom in how to order evaluation.

It's a great article; unfortunately it barely mentions Lisp's function composition capabilities, which is a shame because it could make a nice case for the differences in API design between Haskell's standard library and Common Lisp's.

Lisp did in fact pioneer the introduction of higher-order functions and function composition into programming languages, and it's fairly remarkable that lisps are still around and highly versatile / usable.

What's being shown are Collection Pipelines (https://www.martinfowler.com/articles/collection-pipeline/), which can be implemented in lisp and in fact are. Clojure and Scheme for sure have these procedures, and I am sure they are available for Common Lisp as well.

Whether one prefers Haskell or Lisp is probably in part a subjective decision (and also has to do with socialisation). I think I am falling onto the lisp side of things because of the explorative nature of programming; I had my eureka moment of programming with Test-Driven Development, and for some reason I am gravitating to that in my programming, and it's actually easier to do TDD without a type-checker.

For anyone who is curious about exploring Lisps, I suggest having a look at Clojure or Common Lisp (potentially Dylan if you don't want to bother with parentheses) and not Scheme (because it doesn't have polymorphism and is heavily fragmented).

Uh guys?

    [1]> (remove-if-not #'evenp `(1 2 3 4 5 6 7 8 9 10) :count 3 :start 1)
    (1 2 4 6 8 9 10)
As pointed out by owl57, the Haskell translation is incorrect and the correct implementation isn't quite so trivial. I don't see a way to pull the :count argument out into a separate function. We certainly can for skip, though, and I might.

    Prelude> skipThen 1 (removeIfNot even 3) [2,3,4,5,6,7,8,9]
as implemented:

    removeIfNot p count = go count
      where
        go _ [] = []
        go 0 list = list
        go n (x:xs) | p x = x : go n xs
                    | otherwise = go (n - 1) xs

    skipThen count f list =
        let (pre, suf) = splitAt count list
         in pre ++ f suf

You could decompose removeIfNot a bit further by representing the intermediate stages as pairs of lists (result + unprocessed input):

    import Data.Bifunctor (bimap, first)

    type Split a = ([a], [a])
    unsplit :: Split a -> [a]
    unsplit = uncurry (++)

    splitStart :: [a] -> Split a
    splitStart xs = ([], xs)

    skip :: Int -> Split a -> Split a
    skip n (xs, ys) = first (xs ++) (splitAt n ys)

    iterateN :: Int -> (a -> a) -> a -> a
    iterateN n f = (!! n) . iterate f

    removeOneIfNot :: (a -> Bool) -> Split a -> Split a
    removeOneIfNot p (xs, ys) = bimap (xs ++) (drop 1) (span p ys)
    GHCI> unsplit . iterateN 3 (removeOneIfNot even) . skip 5 . splitStart $ [1..15]

Very nice! I had a bit of trouble convincing myself that it has the laziness I'd want out of it, but by experimentation it seems to.

Still, definitely a more complicated decomposition than that described in the article (but maybe a more interesting article!)

With one more combinator we can decompose this slightly further:

    import Data.Bifunctor (second)

    stepSplit :: ([a] -> Split a) -> Split a -> Split a
    stepSplit f = uncurry first . bimap (++) f
    skip n = stepSplit (splitAt n)
    removeOneIfNot p = stepSplit (second (drop 1) . span p)
Or in perhaps-more-familiar monadic terms:

    import Control.Monad (replicateM_)
    import Control.Monad.Trans (lift)
    import Control.Monad.Trans.State (StateT, evalStateT, state, get)
    import Control.Monad.Trans.Writer (WriterT, execWriterT, tell)
    import Data.Bifunctor (second)
    import Data.Functor.Identity (Identity, runIdentity)

    type SplitterT b m = WriterT [b] (StateT [b] m)
    type Splitter b = SplitterT b Identity

    splitter :: Monad m => ([b] -> ([b], [b])) -> SplitterT b m ()
    splitter f = lift (state f) >>= tell

    execSplitterT :: Monad m => SplitterT b m a -> [b] -> m [b]
    execSplitterT m = evalStateT (execWriterT (m >> lift get >>= tell))

    execSplitter :: Splitter b a -> [b] -> [b]
    execSplitter m = runIdentity . execSplitterT m

    skip :: Monad m => Int -> SplitterT b m ()
    skip n = splitter (splitAt n)

    removeOneIfNot :: Monad m => (b -> Bool) -> SplitterT b m ()
    removeOneIfNot p = splitter (second (drop 1) . span p)

    GHCI> execSplitter (skip 5 >> replicateM_ 3 (removeOneIfNot even)) [1..15]
It's hard to say which version is clearer. It probably depends on what you're used to. The code is essentially the same under the Writer/State abstraction, with `execSplitter` replacing both `unsplit` and `splitStart` and monadic bind in place of function composition. The `Splitter b ()` type is isomorphic to the `[b] -> ([b], [b])` used for the argument to `stepSplit` in the first version.

"Whatever happened to the Popular Front, Reg?"

TXR Lisp with unpublished reject function (complementing select):

  1> (let ((r (range 1 10)))
        (reject r (take 3 (drop 1 [where oddp r]))))
  (1 2 4 6 8 9 10)
This is not correct because "drop 1" means "drop the first index where r was found odd" not "drop the first index if it happens to be zero".


  2> (let ((r (range 1 10)))
       (reject r (take 3 [drop-while zerop [where oddp r]])))
  (1 2 4 6 8 9 10)
"Let R be a sequence of integers, indexed from 0. Determine the indices where R is odd, as a sequence of indices in ascending order. Remove the zero index, if it is present. Then take the first three of the remaining indices. Remove the elements with those indices from R."

This seems verbose, but watch it do something Common Lisp's remove-if will choke on:

  2> (take 50 (let ((r (range 1 1000000000000)))
        (reject r (take 3 [drop-while zerop [where oddp r]]))))
  (1 2 4 6 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
   27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
   48 49 50 51 52 53)
range generates a lazy list. where generates the indices lazily. drop-while and take are also lazy, and I just coded reject so that, after the indices have been exhausted, if the remainder of the list is lazy, then it just tacks that lazy suffix into the output. I.e. it's not fully lazy, but in this case it works because the list of indices was trimmed to a finite length.

I think in release 242 I might include reject, as a fully lazy version supporting a lazy, infinite list of indices as well as sequence; and will fix select to also be fully lazy.

Thank you. I've mailed the author, and he updated the post:

> UPDATE 2020-08-03: I no longer stand by the content in this post. I think the overall sentiment is marginally accurate; however, the details in the post are incorrect (as many have pointed out over the years).

> As has been pointed out, remove-if-not’s start/count parameters behave differently and cannot easily be separated out of the function, a design trade-off that I appreciate.

Your lisp syntax is weird but you're right:

The start argument specifies where to start removing things, and the end or count arguments specify where to stop removing things (i.e. at an index or after a certain number of elements, respectively).

I claim this is a pretty good argument against the keyword arguments as they don’t necessarily do what one expects.

Honestly, I think the mistake comes from thinking of `remove-if(-not)' as `filter'.

When I see code saying (remove-if #'condition '(1 2 3 4 5 6) :start 5 :count 3), I would be pretty surprised if it removed anything prior to the 5th element. :count is worse, just by looking at this code I would not be sure whether it would always remove 3 elements (at most) or if it would remove all elements within a subsequence of 3 elements starting at index 5.

A small note: composition is not better than monolithism. The article makes it feel like it is but it is not. They are two perfectly valid approaches when it comes to solving problems. Often a mix of both is used.

Monoliths tend to be better at solving common real life problems efficiently while composition is better suited for unexpected abstract problems. We need both.

For example GNU binutils are not very "unixy", with plenty of "monolithic" options but they still work well with pipelines.

I did find it ironic that composition was called UNIX philosophy, when most UNIX tools are complicated functions with many options much more akin to the lisp function examples. (Of course it's the "UNIX pipes" part of the philosophy that's being alluded to, but it's still a funny twist)

Yeah and the US is a democracy, and Christians are Christlike. We don't always live up to our ideals.

cat -v

I disagree. With composition we have general building blocks which can be combined in an infinity of ways, and remain available to combine with newly added functions. The monolith way provides a fixed, non upgradable set of possibilities, so if your need does not fall into what already exists you need to add yet another monolith (see remove-if vs remove-if-not).

Change the Haskell example to read 3 and 5 as n and m from an external file. All of a sudden your beautiful code is now a huge mess and you need to refactor the one liner into at least three separate functions to try and keep the Maybe types from taking everything over.

Meanwhile in lisp I would have written a special drop3-take5 function in the first place. Then refactoring it would just take replacing 3 with n and 5 with m, adding n and m as arguments and writing a function to read whatever the config file is.

Let's say getting those n and m values has a nasty type like `getNM :: IO (Maybe (Int, Int))`. All you need to do is map twice when using the original function.

  foo (n, m) = take n . filter p . drop m

  bar = fmap (fmap foo) getNM
It's a little awkward, but no major refactor. A small price to pay for the nice types we get. I'd argue that if other languages make dealing with IO and optional values easier, it's because they don't bother restricting IO (not necessarily criticizing that), or they don't bother checking completeness over optionals (i.e. a stray `null` can show up anywhere and derail the program, which I do think is a bad thing). Overall, Haskell provides many powerful tools for composing complex types, such as the functor/applicative/monad classes.

> keep the Maybe types from taking everything over.

Pattern matching on Maybe types is rather unidiomatic Haskell. Much better to take advantage of the type classes to which Maybe belongs: Functor, Applicative, Monad. In particular, there's no need to unwrap/wrap the Maybe value when you have `fmap`.
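For example (a small sketch with a hypothetical `shout` helper, not from the thread), a Maybe value can be transformed without ever unwrapping it:

```haskell
import Data.Char (toUpper)

-- Hypothetical helper: upper-case the contents of a Maybe String
-- without pattern matching on Just/Nothing.
shout :: Maybe String -> Maybe String
shout = fmap (map toUpper)

-- shout (Just "hi") gives Just "HI"; shout Nothing stays Nothing.
```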

Isn't that the exact same thing you would do in Haskell? How you read from a config file in Haskell seems irrelevant here, and troubles with it are more related to purity than composition.

  dropNTakeM p n m = take m . filter p . drop n

I mean the -if vs -if-not is really more for typing convenience. The current CL standard actually recommends against the -if-not functions because you can use composition: `(remove-if (complement #'predicate) sequence)`.

I was a bit puzzled. Clojure seems to provide both avenues.

The "Common Lisp" version:

  (for [i (range 1 10)
        :when (even? i)
        :while (< i 5)]
    i)
A Haskell-y version (according to the article):

  (->> (range 1 10)
       (take-while #(< % 5))
       (filter even?))
Using Clojure's transducers:

  (into [] (comp
            (take-while #(< % 5))
            (filter even?))
        (range 1 10))

There's a more fundamental philosophical difference:

Common Lisp promotes the use of docstrings and apropos tools in order to discover and understand behaviour, with symbol names that tend to describe what they represent.

Haskell leans on type signatures and symbolic operators.

IMHO, a pseudo-random sequence of non-word symbols tied to a type signature is of very little use.

I.e., something fabricated:

    (-*/) :: A -> M c -> B
Ah yes, so understandable.

People who write Haskell don't just leave their symbolic operators undocumented. It's definitely heavy on the symbols but that and type signatures are certainly not the only tools Haskell devs have.

On the one hand, the parent's comment is certainly hyperbole.

On the other hand, it's also the case that too many Haskell projects are under-documented. Probably worse than the typical language, and certainly worse than the best-in-class.

On the gripping hand... while types make poor documentation, they are always correct documentation, and they are automatically generated documentation. When I write Python or (apparently) Common Lisp, I can expect to get reasonable documentation by interrogating anything someone hands me. In Haskell, insofar as I have learned to get good information from the types, I get some (correct!) documentation for things that I have newly assembled myself!

I find type information for functions incredibly useful. I get lost in Python code once it gets past a certain size, even after years of Python programming experience, because of its limited ability to put a useful amount of structure into the code. After having programmed in Haskell for a while, I just miss tabbing a function and being able to see its type signature.

That, and more, is available in common lisp.

    * (describe #'+)
    #<FUNCTION +>
      [compiled function]
    Lambda-list: (&REST SB-KERNEL::NUMBERS)

      Return the sum of its arguments. With no args, returns 0.

    Known attributes: foldable, flushable, unsafely-flushable, movable, commutative
Docstrings and type information can (optionally, but recommended) be added to all code. Users of your API can use documentation/describe to discover information about it without ever having to look at the implementation.

> IMHO, a pseudo-random sequence of non-word symbols tied to a type signature is of very little use.

Having only ever dabbled in Haskell, I feel the same way, but I'm aware that proficient Haskell programmers don't tend to agree. How do you rate your Haskell skills?

I've heard the same said of another 'weird and wonderful' language: Forth. Yes, it's incomprehensible if you aren't familiar with the language, but that's not really who the language is for.

I wrote a fair bit of Haskell about a decade ago; I'd rate myself a novice still, but I'm confident that I could knock out some games in a game jam using Haskell.

I'm surprised no one has plugged Hoogle. It was the hands-down best resource to make Haskell reasonable for me. But even still, I found code that I wrote in Haskell not unlike the plentiful amount of code I wrote in Perl: returning to it after the fact was tedious.

> (-*/) :: A -> M c -> B

let's try to assign some meaning to this. From the get go, you know that it's a function that turns an A into a B, and it only has the information that is derivable from `M c`.

So, yes this is gibberish. But if there's a slight modification to this:

    (-*/) :: A -> M c -> c
then it makes a lot of sense. Signatures are a much shorter, terser way to communicate than words (and nesting structures that LISP uses) imho.

It was meant to be pithy gibberish, yes; I tried to pick three characters for an operator that I was semi-confident I hadn't seen while writing Haskell, and a garbage signature.

But even with your alteration, I'm not sure what it does. What interaction is there between A and M c? Does it mutate c, or just unwrap it?

It can't mutate `c` because it doesn't know what `c` is. There might not even be a value to mutate; `c` could be Void, or a phantom type parameter. But given a definition for `M c` like:

    data M c = M (A -> c)
there would only be one reasonable implementation:

    a -*/ (M f) = f a
If `M c` does not contain a function from `A` to `c` then the only possible result is non-termination. You weren't given a value of type `c`, and you have no way of producing one from the inputs, so you can't return such a value, and the only alternative is not returning at all.

Of course the real problem here is that (at least without context) this operator name doesn't mean much. You could make the same complaint about a perfectly ordinary function named `ixfni`. The only real difference in Haskell is that symbolic operators are written infix by default, while named functions are written prefix, but it's very easy to flip that around:

    (-*/) a b
    a `ixfni` b
Note that you can even define fixity and precedence for named functions in their infix forms. Unlike most other languages with user-defined operators there is remarkably little pressure to rely on symbols in situations where they would be unclear. Symbolic operators should only be defined in situations where the symbols are reasonably well-known (within some context) or "intuitively" related to existing operators.
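A small sketch of that last point, reusing the made-up `ixfni` name (the body is arbitrary, invented for the example):

```haskell
-- Declare precedence and left associativity for a named function
-- used in its infix (backtick) form.
infixl 6 `ixfni`

ixfni :: Int -> Int -> Int
ixfni a b = a + b  -- arbitrary body for illustration

-- 1 `ixfni` 2 `ixfni` 3 parses as (1 `ixfni` 2) `ixfni` 3
```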

I think it should be stressed that most languages (except maybe bash I guess) can’t provide the kind of compositionality that Haskell does.

This is because Haskell’s lazy evaluation allows these compositions to do an asymptotically appropriate amount of work. Consider this Haskell:

  xs = [1..20]
  p n = if n < 13 then even n else p n
  ys = take 5 . filter p . drop 3 $ xs
  main = print ys
If a strict language were being used then the processing would be as follows:

1. drop 3 xs is [4..20].

2. When computing filter p [4..20], everything is ok up to 12, but computing p 13 goes into an infinite loop.

3. We never get a result to compute take 5 of.

Because Haskell is lazy, it will not try to compute p 13 and so can do the right amount of work.

Perhaps a more reasonable example would be that if the list xs were really long, a strict program would have to evaluate p against every element before taking the first 5 results (so it would take linear time if p were constant time) while a lazy program would only need to compute p on enough elements to collect 5 results.
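A quick sketch of that point (my example, not the poster's): with laziness the same pipeline even works on an infinite list, because filter is only evaluated as far as needed:

```haskell
-- Terminates despite [1..] being infinite: take 5 demands only
-- enough of filter's output, which demands only enough of [1..].
firstFiveEvens :: [Int]
firstFiveEvens = take 5 (filter even [1..])
-- firstFiveEvens == [2,4,6,8,10]
```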

Common Lisp was designed for older, slower computers than Haskell and to be able to make programs which were reasonably efficient. It also had strict evaluation and so with this context, the extra arguments to many functions for special cases (which lots of people didn’t like) or the combined iteration of loop (which lots of people didn’t like) make a lot more sense.

Since bash and the Unix core utils are written in C, I strongly challenge your claim that "most languages" can't do stream processing. You don't have to have a lazy language to implement a lazy data type.

The difference is between pervasive laziness (laziness by default) v/s laziness by opt-in.

If you have laziness by opt-in, then most parts of the language will not opt-in [from prior experience with, say, racket which also has lazy streams].

It's the pervasive and efficient compilation of laziness in Haskell which makes it actually useful to write lazy programs.

Now, am I a fan of writing lazy programs? No, I find the upshot of referential transparency gained by laziness to be overshadowed by the warts it brings --- lack of a good debugging experience, tie-the-knot semantics being fragile, etc. I'd rather live in a strict total programming language.

But one cannot "handwave" away laziness in haskell that easily by saying "you can emulate it in $STRICT_LANGUAGE". By the exact symmetric argument, one can emulate strict code inside haskell: just use `ST` or `IO`. But very rarely does one see whole libraries written strictly, because this style of code doesn't compose with the (default) laziness and purity in haskell.

The main difference is that in lispy languages you wouldn't use the language itself to solve the problem. You'd extend it. I'd write a function that is (drop3-take5 test list) which does exactly what it says.

This might seem like huge overkill, but when the specs change and I need to set the start and stop from a config file I can easily refactor drop3-take5 into (drop-n-take-m n m test list), with a let around it to read the config.

drop3-take5 would look something like:

    (define drop3-take5-list
      (lambda (ls filter)
        (define internal
          (lambda (ls matches)
            (cond
              ((null? ls) '())
              ((filter (car ls))
               (cond
                 ((< matches 3) (internal (cdr ls) (1+ matches)))
                 ((< matches 8) (cons (car ls) (internal (cdr ls) (1+ matches))))
                 (else '())))
              (else (internal (cdr ls) matches)))))
        (internal ls 0)))

This seems like an unsustainable approach: will you need to write these bespoke functions for any combination of behaviors you would need?

Lisp is about writing a dsl that solves your problem. It just happens to be an s-expression dsl. Exactly how it works depends on the problem domain. But in my experience a lisp dsl is much less brittle than trying to map the problem into a haskell program, especially with a changing spec.

This is a bad example because it is so contrived to show off the strengths of haskell.

As an example for the strengths of lisp: I wrote a logical dsl that adds a boolean dependent type system to scheme using macros. It's 400 lines long and copies all the bits from Idris I needed and it still feels like a scheme and is fully interoperable with regular scheme.

Ok, now it fails - how are you going to interpret the stack trace?

Not sure what you mean, it depends on the implementation. The cheapest cop out answer would be: use racket and step through until you reach the point where it fails. The IDE even gives you a nice handy list of the current continuation.

I can trace it less well in Guile, which is the usual scheme I use for daily projects, but it's not something I've had huge trouble with on medium-sized projects (>10,000 lines).

That's a helpful answer. Apologies if it seemed confrontational.

The remove-if-not example is pretty funny. No, it doesn't have any obvious translation into function composition, and in particular isn't equivalent to the Haskell example. Instead, :count specifies how many elements to remove and :start specifies when to start removing.

I wonder if the fat macro eDSL wasn't just a temporary trend in lisp. Some lispers like small and composable... after all, that was part of the reason behind scheme. picolisp is also very combinator-minded. Clojure likes that too (even though it came after the fp revival).

Deeper philosophical differences:

* Lisp is a multi-paradigm language. Haskell is a pure functional language.

* Haskell wants to be mathematics. Lisp wants to be an operating system.

I believe the composition vs. monolithism difference comes from these deeper differences.

> Lisp wants to be an operating system.

Lisp wants to be interactively used.

I dislike the lisp convention of remove-if, remove-if-not, etc. Filter and take isn't much better. The python convention is much nicer:

  [x for x in foo if x not in y]
Turns out, this can be expressed in Lisp:

  (list x for x in foo if x not in y)
I've implemented this into my fork of Lumen that runs on Python. https://github.com/shawwn/pymen It feels very nice to use:

  > (list x for x in (range 10) if (= (% x 2) 0))

  Built-in mutable sequence.

  If no argument is given, the constructor creates a new empty list.
  The argument must be an iterable if specified.
  [0, 2, 4, 6, 8]
It wasn't even that ugly to implement. It's ~20 lines: https://github.com/shawwn/pymen/blob/1c191c9e00e73303a6479f8...

One other neat thing is that the REPL prints out the docstring of the last evaluated thing by default. It makes it nice to explore libraries:

  > (import numpy as np)

  > np.vsplit

It's like an automatic help() call on the last evaluated thing.

(If anyone happens to try it out, you can get into a REPL by running `rlwrap bin/pymen`, or just `bin/pymen`.)

It is also ~20 lines in CL http://lisp-univ-etc.blogspot.com/2013/01/real-list-comprehe...

But list comprehensions suffer from the same problem the author is talking about: they don't compose. It's even worse than the CL remove-if variant, where one can at least reify the filter into a function that can then compose.

For those that didn't know, Haskell does of course have list comprehensions:

    [x| x <- foo, not (elem x y)]
Here's an example finding consonants:

    [c | c <- ['a'..'z'], not $ elem c "aeiou"]

No need for the dollar sign in the second example :)

A nice thing about Haskell list comprehensions is they're based on a trivial transformation to monad syntax, which makes it easier to factor out complex list transformations into multiple little bites. I've run into this problem in Python. Also, Python list comprehensions can often behave very unexpectedly due to mutability.

Here's an example of the monad syntax:

    [(x,y) | x <- [1..10], y <- [1..x], odd (x + y)]

    do
      x <- [1..10]
      y <- [1..x]
      guard $ odd (x + y)
      return (x,y)
Lots of combinatorics problems can be elegantly solved using the list monad like that.
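For instance (my sketch, not from the thread), the classic Pythagorean-triples search falls out of exactly that pattern:

```haskell
import Control.Monad (guard)

-- All Pythagorean triples (a, b, c) with a <= b <= c <= n,
-- using the list monad for the search and guard to prune.
triples :: Int -> [(Int, Int, Int)]
triples n = do
  c <- [1..n]
  b <- [1..c]
  a <- [1..b]
  guard (a*a + b*b == c*c)
  return (a, b, c)

-- triples 10 == [(3,4,5),(6,8,10)]
```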

Example where Python list comprehensions behave very unexpectedly...?

Haskell's list comprehensions actually started off as more general monad comprehensions, were later restricted to lists, and are back behind a GHC language pragma.
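The pragma in question is MonadComprehensions; with it enabled the same syntax works over any monad, e.g. Maybe (a quick sketch):

```haskell
{-# LANGUAGE MonadComprehensions #-}

-- Comprehension syntax desugars to the Maybe monad here.
sumMaybes :: Maybe Int
sumMaybes = [ x + y | x <- Just 1, y <- Just 2 ]

-- Any Nothing short-circuits the whole comprehension.
shortCircuit :: Maybe Int
shortCircuit = [ x + y | x <- Just 1, y <- Nothing ]
```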

Huh, the syntax can be almost directly spliced into Common Lisp's loop:

  CL-USER> (let ((foo '(1 2 3)) (y '(2 4)))
             (loop for x in foo unless (member x y :test #'=) collect x))

  ;; -> (1 3)
It wouldn't be hard to write a macro that expands your expressions to CL's loop, with a bit of magic to parse the "if ... [not] in ...".

Even though it's more "Pythonic", I tend to avoid your Python version. I think my favorite way to write that would be:

  import whatever
  filter(_ not in y, foo)

For me it's more readable. To each their own, of course.

I've noticed that these differing approaches lead to argument order being flipped sometimes.


Because Chicken's `string-split` expects its delimiter to be optional, it wants a string first.


While Haskell's emphasis on composition means it's more logical for the constant parameter—the delimiter—to be provided first.

It's not a great example, but the first one I can think of. The difference is starker when the Lisp command has many more parameters than the Haskell one, with some of them being optional. When trying to write cute point-free code in Scheme, I found myself using `flip` too much for it to be fruitful.
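A sketch of the same effect using only the Prelude: `replicate` takes its constant (the count) first, so partial application composes directly, while a hypothetical data-first variant forces `flip`:

```haskell
-- Prelude order: count first, so partial application just works.
pairUp :: a -> [a]
pairUp = replicate 2

-- Hypothetical data-first variant (invented for the example):
replicateDataFirst :: a -> Int -> [a]
replicateDataFirst = flip replicate

-- point-free use now needs flip again:
pairUp' :: a -> [a]
pairUp' = flip replicateDataFirst 2

-- both give pairUp 'x' == "xx"
```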

I hate when I have to do flip. Dislike reading it even more. I don't know what the solution is, other than using points..

I am less experienced than OP, but I got the impression the philosophical difference is that with Lisp, you are trying to abstract things with a simple language and AST code generation (macros) whereas in GHC Haskell you are trying to abstract things with a very in depth language, heavily based on mathematical ideas (see all those language extensions!).

Yes. Though in theory you could use Haskell with a Lisp-like syntax just fine.

Many of the language extensions are sugar over a simpler (implied) core of the language. Some are purely syntactic, like NumericUnderscores. Some others like GADTs are semantic, but still mostly explained in terms of mapping to this simpler language.

> One difference in philosophy of Lisp and Haskell is that the latter makes liberal use of many tiny functions that do one single task. This is known as composability, or the UNIX philosophy.

Have you ever looked at a Unix man page?

And while Common Lisp has some functions that might be swiss army knives of functionality, that is not "the philosophy of Lisp", it's a particular style. Clojure and Scheme are just as much Lisp as CL, and as other people have noted, their style is very similar to Haskell.

When I started out with Python, I was learning Haskell in parallel. As a result, my early Python code ended up being influenced by Haskell (I wrote many small functions that performed a single task). But it was somewhat frustrating. Later, I developed my Python style and began heavily relying on optional arguments through `kwargs` and duck typing. That made my programs much more effective and easier to reason about. Now I realize that my later Python code became more Lisp-like.

Reminds of the difference between Unix and GNU. I wonder to what extent RMS's Lisp background influenced GNU's kitchen sink tendencies.

I really enjoy programming (a lot!) in both Common Lisp and Haskell, and I agree with his comments on both languages.

Chris and I probably differ in that I love the kitchen-sink built-in functions. It is also common to write lots of very short Common Lisp functions.

Any particular reason why Lisp doesn't also tend to compose with smaller functions? Did it just happen to evolve that way? Is it easier to get the types of the smaller functions right when you have a compiler to help you like you do in Haskell?

Clojure takes that approach

How do Clojure transducers compare to Haskell's stream fusion in 2020?

For one, Lisp is not lazy like Haskell.

The article elaborates on this point a bit:

> Like pipes in UNIX, the functions are clever enough to be performant when composed together–we don’t traverse the whole list and generate a new list each time, each item is generated on demand. In fact, due to stream fusion, the code will be compiled into one fast loop.

I imagine it is hard to do the same optimization in a non-lazy language.

It's interesting that he says you also need purity to do the optimization, but Unix pipes are not necessarily pure.

Unix pipes don't do stream fusion, which is the optimisation to which he was referring.

So if I put (x1, x2...) through pipes f and g, instead of calculating all the f(x)s first followed by all the g(f(x))s, it calculates g(f(x1)), then g(f(x2)), etc.

How is that order of operations different to stream fusion? Why does stream fusion require more purity than Unix pipes?

Let's say there's a global mutable variable M (doesn't have to global, can be x1.M = M, x2.M = M, for example, but easier to think with global) which is used by and modified when f(x) or g(y) runs. Then, the order of operations would matter and so you get different results with and without stream fusion.

Right, but couldn't the same thing happen with Unix pipes if the components write to a global file?

You can do the same optimization Unix pipes does without language enforced purity, just with a caveat that functions sharing state might behave weirdly when fused.

You could... I guess language designers generally avoid those types of optimizations since it can bring about unexpected results. As in, if syntactically the code is using one order of operations and semantically the code is using another order of operations, even if it's documented, lots of programmers would stumble into difficult to find bugs and be surprised by this behavior. Or worse, just get wrong results and never even find out it's wrong.

I don’t understand why the above got down-voted... Without fusing (lazy) loops, having a bunch of independent sequential transformations (each with its own loop) can make the same computation significantly slower!

Ruby and Scala don't have lazy evaluation by default, yet there are views (Scala) and lazy (Ruby), so no, you don't necessarily need compiler help.

Yeah, you can emulate laziness at the library level with iterators, coroutines and whatnot, but it always comes with a certain API friction (some readers might be familiar with the “colored functions” analogy [1]) compared to Haskell’s approach that takes laziness to the extreme.

[1] http://journal.stuffwithstuff.com/2015/02/01/what-color-is-y...

I was really hoping that article was going to start talking about effect systems. They're so cool! They make it possible to have many different function colors without it being a huge chore. They can be thought of as a more-ergonomic generalization of monads.

I'm sure you (OP) are aware, but just in case others aren't: Clojure stands out as a Lisp that embraces laziness.

Not super familiar with either language; how does lazy evaluation promote the use of smaller functions?

In non-lazy languages you would need to implement laziness yourself to compose functions with good performance. An example being take 5 . filter p: if evaluation weren't lazy, filter would traverse the whole list and build its complete result before take 5 ever ran.

In the languages I use (Scala, Ruby) it's already done, Rust is using iterators which are lazy too. None of those languages are "lazy" in Haskell sense.

Impurity plays poorly with function composition. cf "value restriction".

> One difference in philosophy of Lisp (e.g. Common Lisp, Emacs Lisp) and Haskell is that the latter makes liberal use of many tiny functions that do one single task. This is known as composability, or the UNIX philosophy.

This honestly seems like a misunderstanding which ignores how many POSIX commands actually accept options. The Unix philosophy doesn't mean literally one thing, but rather one task; sometimes the task will have different parameters, and that's OK, but we're not going to have `ls` take out the trash.

> One difference in philosophy of Lisp (e.g. Common Lisp, Emacs Lisp) and Haskell is that the latter makes liberal use of many tiny functions that do one single task. This is known as composability, or the UNIX philosophy.

I would rather say that each tiny function handles one factor and that a program is the decomposition into factors. Lisp also composes functions, but they are chunkier and can be ad-hoc one-offs. Static types vs runtime errors is another point of difference.

Javascript and Python have similar capabilities, but they're missing the stream fusion optimization mentioned at https://chrisdone.com/posts/stream-composability/


  a = [...Array(20).keys()]
  p = x => !(x % 2)
  a.slice(3).filter(p).slice(0, 3)

   a = range(20)
   p = lambda x: not x % 2
   list(filter(p, a[3:]))[:3]

The code is not equivalent to the Common Lisp (and neither is the Haskell version): `remove-if-not` should remove up to `count` elements failing the predicate, starting at `start`

    (remove-if-not #'evenp '(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15) :count 5 :start 3)
    (1 2 3 4 6 8 10 12 14 15)

    a = (1..15)
    r = []
    c = []
    a.each.with_index do |e, i|
      if i <= 3
        r << e
      elsif c.length < 5
        if e.even?
          r << e
        else
          c << e
        end
      else
        r << e
      end
    end
    r
    #=> [1, 2, 3, 4, 6, 8, 10, 12, 14, 15]
I can see no simple way to compose this

    a = (1..15)
    l = a.take(3)

    c = 0
    m = a.drop(3).map.with_index { |e, i| [e, i, e.even?] }.take_while { |e, i, p| c +=1 unless p; c <= 5 }
    _, last_m_index, _ = m.last
    m = m.filter { |e, i, p| p }.map { |e, i, p| e }

    r = a.drop(3 + last_m_index + 1)
    l + m + r
    #=> [1, 2, 3, 4, 6, 8, 10, 12, 14, 15]
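For what it's worth, here is a hypothetical Python sketch of the `:count`/`:start` semantics built from lazy, composable pieces (the `remove_if_not` helper is mine, not standard anywhere):

```python
from itertools import chain, islice

def remove_if_not(pred, xs, count=None, start=0):
    # Keep the prefix before `start` untouched.
    it = iter(xs)
    head = islice(it, start)
    removed = 0
    def rest():
        nonlocal removed
        for x in it:
            if pred(x) or (count is not None and removed >= count):
                yield x  # keep matches, and everything once `count` is spent
            else:
                removed += 1  # drop a non-matching element
    return chain(head, rest())

result = list(remove_if_not(lambda x: x % 2 == 0, range(1, 16), count=5, start=3))
print(result)  # [1, 2, 3, 4, 6, 8, 10, 12, 14, 15]
```

So the behavior composes fine out of small lazy parts; what Common Lisp bundles into keywords here becomes an explicit little combinator.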

in Python 3 both range and filter are lazy

but the slices here would spoil it

there is https://docs.python.org/3/library/itertools.html#itertools.i... islice however...
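A small sketch of what that looks like: `itertools.islice` replaces both list slices, so the whole pipeline stays lazy.

```python
from itertools import islice

a = range(20)
p = lambda x: not (x % 2)

# Equivalent to take 5 . filter even . drop 3, without materializing lists:
result = list(islice(filter(p, islice(a, 3, None)), 5))
print(result)  # [4, 6, 8, 10, 12]
```

Only the final `list()` forces anything; swap `a` for an infinite iterator and it still terminates.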

You can also do this nicely with LINQ in C#.

> Having written my fair share of non-trivial Emacs Lisp (and a small share of Common Lisp; I’ve maintained Common Lisp systems) and my fair share of non-trivial Haskell I think I’m in a position to judge.

While I prefer Haskell, judging Lisps by Emacs Lisp isn't entirely fair. It's probably the worst Lisp in somewhat wide use.

('newLisp' is worse, of course. But thankfully no one in their right mind uses it.)

The main difference is that Haskell is a very specific language, whereas Lisp is a family. Not all the members of the family are based on exactly the same philosophy.

The remove-if-not function is just a Common Lisp thing.

And, by the way, at the bottom of its description in the standard, we find this:

The functions delete-if-not and remove-if-not are deprecated.

That deprecation is meaningless, since there is no standard body to revise the standard, and users are perfectly willing to continue to use those functions.

I tend to think that most natural languages use the same kitchen-sink approach as Lisp, and as a consequence have a sprawling vocabulary. For some peculiar reason, people seem to like it that way. So, naturally speaking, I guess Lisp is probably going to be more popular than Haskell.

Guys, it produces different results, check yourself

    Prelude> take 5 . filter even . drop 3 $ [1, 2, 3, 4, 5, 6, 7, 8, 9]
    [4,6,8]

    (remove-if-not #'evenp '(1 2 3 4 5 6 7 8 9) :count 5 :start 3)
    (1 2 3 4 6 8)

I write simple lisp functions that are not "configured" as this guy is saying. Not sure I agree. The philosophical difference he talks about seems more of a paradigm choice than an inherent language construct.

I'm a little bit confused about the notion that composition isn't used much in lisp as opposed to Haskell.

Isn't composition literally the benefit of S-expressions, prefix notation, and everything being a function?

Lisps lack implicit currying, so function composition becomes much more verbose.
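A sketch of the verbosity cost, using Python as the stand-in for a language without implicit currying: every stage has to be partially applied by hand before it can be composed (the `compose`, `take`, `keep`, and `drop` helpers are mine):

```python
from functools import reduce

def compose(*fs):
    # Right-to-left composition, like Haskell's (.)
    return reduce(lambda f, g: lambda x: f(g(x)), fs)

# Manual "currying": each helper returns a one-argument function.
take = lambda n: lambda xs: list(xs)[:n]
keep = lambda p: lambda xs: filter(p, xs)
drop = lambda n: lambda xs: list(xs)[n:]

pipeline = compose(take(5), keep(lambda x: x % 2 == 0), drop(3))
print(pipeline(range(1, 10)))  # [4, 6, 8]
```

In Haskell `take 5 . filter even . drop 3` needs none of that scaffolding, because partial application falls out of the function types.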

And functions aren't that important in CL; not everything is a function, and CL is usually written in an OOP or procedural style. CL is more about data and AST composition than anything else.

Moreover, CL is a lisp-2, so in a very real sense, functions are second class citizens in the language.

Or, you use reader macros to make currying concise.


Functions are not second class citizens. They are privileged with their own namespace. That's a perk, not a penalty.

One notable form of composition in CL is method combination. Does any other language support something like that?

Having to prefix functions with #' is a perk? No other language uses a separate name space for functions, and for good reason, because it sucks.

It's an optimization for usability. Many more symbols occur as the heads of list forms than appear after #'. So, the common case is optimized for clarity and the relatively uncommon case requires two extra characters.

I completely disagree that it sucks. What sucks is having to remember not to have your variable names collide with your function names (or indeed any of the other namespaces in CL). Things that are usually used in different ways benefit from having separate namespaces.

Unrelated to the main subject, but I wonder why in the shell example the user is asked to write all the numbers from 1 to 10 (via cat!), when `seq 1 10` is equally expressive, and much more concise.

I think it is largely because the Common Lisp standard library was written in the keyword-arguments fashion. I assume it could also have been written using small composable functions. No?

It's a shame that that blog post's buggy Haskell, which breaks the rhetorical argument, has been live for 5 years and never corrected or withdrawn.

This is really just talking about which functions they decided to include in their standard library. Clojure, for example, takes the Haskell approach, so it's not a question of composability vs monolith.

Sure, composability is nice and should probably be the way you start creating something, but after you are satisfied that it works the way you want and you want to make it more efficient, you will go monolith and treat this new thing as a bigger piece you can compose with.

Someone new can then come along, fail to understand why it's a monolith, and decide to make it better by making it out of composable parts again.
