
A philosophical difference between Haskell and Lisp (2015) - behnamoh
https://chrisdone.com/posts/haskell-lisp-philosophy-difference/
======
avindroth
There is a strange... kind of poetry with Haskell. It is like math on wheels,
math applied to procedures, math with... time.

Its appeal to me is like the appeal of math to me, not like real analysis math
but abstract algebra math. The beauty, the purity of mathematics of younger
days that once became lost after encountering the sad complexities of the
world. Understanding every little aspect and being able to prove every part is
now a luxury, and interfacing without real understanding is the more practical
approach in the turbulent waters of poorly connected technological and social
systems.

But still there is hope and there are dreams. We like to drench ourselves in
dream qualia sometimes, and Haskell and pure math are that medium. The
abstractions of it, the consistency of it, the purity of it... When Haskell is
called a pure language, it almost goes beyond the static definition of
functions being pure, and describes the general feeling that occurs when
writing Haskell. You feel pure. You feel like you are taking these small parts
and creating greater parts in an elegant buildup of abstractions, traversing
one level higher and one level lower at your whim.

Lisp... maybe it's the parentheses, maybe it's something else... it never
really caught on with me like Haskell did. Haskell feels pure and dream-like and
perhaps unsuited to the world where (if you really get down to it)
abstractions and types are just useful ‘human’ inventions and unfit for every
usage. The world is for getting down and dirty, and mathematics, or at least
the pure side of it, really isn’t. The representative mathematics of Haskell
is Category Theory, and it is just as far from the level of “real” as it can
be. More abstract than abstract algebra, if you will.

Abstraction itself is an intellectual operation that is also rooted in
emotional detachment. Perhaps Haskell represents that kind of ideal in a
modern world where practicality pays before purity.

~~~
enriquto
I followed a (pure) mathematics education, and I totally dig lisp. On the
other hand, I abhor haskell and hate almost everything about it!

~~~
dan-robertson
I also followed a pure maths education. I like lisp (in the CL/emacs lisp
family more than scheme which tends to have smaller composable functions) and
Haskell. I find some parts of Haskell culture/styles to be quite silly
however, and fewer parts of the lisp community to be silly. But that may just
be because of Haskell’s greater popularity.

~~~
LesZedCB
is haskell more popular now? what about clojure?

~~~
eru
Perhaps greater memetic popularity?

------
jetrink
This is interesting to read having worked with Clojure, but never Haskell or
CL. I expected the Haskell examples to look alien and the CL to look familiar,
but the idiomatic Clojure solutions to the examples are almost identical to
the Haskell solutions. E.g.

    
    
        take 5 . filter (not . p) . drop 3
    

becomes

    
    
        (->> s (drop 3) (filter (complement p)) (take 5)) ; for some sequence s
    

I think it is also true of Clojure that it strives to have many small
functions with a high degree of composability.

~~~
behnamoh
Is there something similar to (->>) in Haskell?

Edit: corrected “—>”.

~~~
nimih
Assuming that (-->) is either a typo of Clojure's ->> or ->, or some CL macro
that works the same (I don't know any CL, admittedly): in Haskell there is the
somewhat common & (as in, available in the standard library and the
widely-used lens library, but not part of the Prelude, and a lot of folks seem
to eschew it in favor of composing in "the other direction" with . and $),
which is defined as `flip ($)` (i.e. reversed function application) and
visually ends up working the same as ->>:

    
    
        myList
        & map someFunction
        & filter somePredicate
        & sum
    

Other ML-ish languages typically provide this operator, but under a different
name, |> being a common one. One important note is that the mechanics here are
a bit different from threading macros in lisp, since you're explicitly
building the expression you want, using operator precedence (and
implicit/automatic currying) to get the visual effect, rather than invoking a
rewrite rule. This means you can't have a straightforward reproduction of
Clojure's -> macro, although in practice this rarely matters, and when it does
you can still fake it with flip (or, more readably, parens and a lambda).
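A small sketch of the above (`&` is `Data.Function.(&)` from base; the names `pipeline` and `threadFirst` are just illustrative):

```haskell
import Data.Function ((&))

-- Thread-last pipeline, visually like Clojure's ->>:
pipeline :: [Int]
pipeline = [1 .. 10]
  & map (* 2)
  & filter (> 5)
  & take 3

-- There is no direct analogue of thread-first (->), but a lambda
-- places the piped value anywhere, here as zip's first argument:
threadFirst :: [(Int, Char)]
threadFirst = [1, 2, 3]
  & (\xs -> zip xs "abc")
```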

~~~
eru
If you really want to, Haskell also has a few macro systems. They are just not
used nearly as much as in Lisp.

------
ssivark
When I was new to Haskell (and also not an experienced programmer), the
following example on “functional style” had a huge positive impact on me --
where the same program is re-written in ten different ways, each time making
it more modular & composable:
[http://yannesposito.com/Scratch/en/blog/Haskell-the-Hard-
Way...](http://yannesposito.com/Scratch/en/blog/Haskell-the-Hard-Way/)
(section 3.1... Feel free to ignore everything before/after that)

It feels so much easier to both understand and modify the evolved version of
the code.

~~~
jlg23
Thanks a lot, this is a link to hand to aspiring FP programmers, as the
content is well presented.

This one made me laugh out loud: "This is sometimes called parametric
polymorphism. It’s also called having your cake and eating it too."

------
ajarmst
This seems to imply that extensive use of composition in Haskell is a notable
stylistic decision rather than an entailed effect of the fundamental design
choices (first-class functions, lazy evaluation, type classes) that explicitly
encourage such composition and a terse, mathematical syntax that makes the
composition operation obvious.

Heavy use of composition is idiomatic in lisps, too; the syntax just doesn't
rub your face in it. It's pretty common to see a form in lisp that ends in a
dozen close-parentheses as chains of forms are passed as arguments to other
forms. It would be weird if composition _wasn't_ idiomatic in a language with
first-class functions.

The loop macro isn't good evidence: it's notable precisely because of its
difference from typical lisp usage. I've heard lisp programmers claim to
refuse to use loop because it clashes with the rest of the language. I'm less
interested in purity, and it is so convenient sometimes... if only as a clear
illustration of just how much power and potential for abuse the CL macro
system provides. And let's just leave CLOS and metaobject protocols under the
tarp for now :-)

~~~
eru
> It’s pretty common to see form in lisp that ends in a dozen close-
> parentheses as we see chains of forms being passed as arguments to other
> forms.

That's just a chain of function applications.

Function composition puts the emphasis on the fact that functions are values
in their own right and can be manipulated (eg with a composition operator) as
objects without regard to any values they might act on.
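The distinction in a Haskell sketch (the names `applied`, `transform`, and `results` are just illustrative):

```haskell
-- A chain of applications: each function is called on a concrete value.
applied :: Int
applied = negate (succ (abs (-5)))

-- Composition builds a new function *value* before any argument exists;
-- that value can then be stored, passed around, or mapped like any other.
transform :: Int -> Int
transform = negate . succ . abs

results :: [Int]
results = map transform [-5, 3]
```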

If you want to see one Haskell equivalent of loop,
[https://hackage.haskell.org/package/monad-
loops-0.4.3/docs/C...](https://hackage.haskell.org/package/monad-
loops-0.4.3/docs/Control-Monad-Loops.html) has got you covered.

~~~
ajarmst
I'm not convinced that chained application and composition are meaningfully
distinct in anything but a syntactic sense if the language in question offers
first-class functions and deterministic evaluation (e.g. no side effects). In
mathematics, composition ($f \circ g$) is invariably described in terms of
chaining f(g()); in fact, the composition operator is usually defined as an
alias for chaining. Yes, there are subtleties I'm glossing over, but I don't
think they support the thesis that function chaining and composition are
significantly different at this level (i.e. the level of syntax and idiom
rather than the level of Category Theory).

~~~
eru
Mostly agreed.

John Backus seemed to think it was a big deal. See eg 'Can Programming Be
Liberated from the von Neumann Style? A Functional Style and Its Algebra of
Programs '
([http://www.ict.nsc.ru/xmlui/bitstream/handle/ICT/1256/1977-b...](http://www.ict.nsc.ru/xmlui/bitstream/handle/ICT/1256/1977-backus.pdf))

On the one hand, I do agree that in a language like Haskell it's almost
entirely a syntax level distinction.

On the other hand, syntax level conveniences and inconveniences can have a big
impact on how people use a language. (Eg in Haskell if-then-else is more
hassle to type out than pattern matching. A slight nudge by the language
design to make users prefer the latter.)

Once you have acquired the ability to see functions as objects in their own
right, many more techniques become available. Think of eg parser combinators.

Technically you don't need the syntactic convenience to use parser
combinators. But I am not sure they are worth the hassle without those
conveniences. Eg Python has enough functional machinery to support parser
combinators, but its options for defining functions (def and lambda) are just
so cumbersome.
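A minimal hand-rolled sketch of the parser combinator point (no parser library assumed; all names here are hypothetical):

```haskell
-- A parser is itself a value: a function from input to a possible
-- result plus the remaining input.
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

-- Match one specific character.
char :: Char -> Parser Char
char c = Parser $ \s -> case s of
  (x : xs) | x == c -> Just (c, xs)
  _                 -> Nothing

-- Sequencing is an ordinary function on parser values.
andThen :: Parser a -> Parser b -> Parser (a, b)
andThen p q = Parser $ \s -> do
  (a, s')  <- runParser p s
  (b, s'') <- runParser q s'
  Just ((a, b), s'')

-- Combinators compose without ever mentioning the input string:
ab :: Parser (Char, Char)
ab = char 'a' `andThen` char 'b'
```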

------
lispm
The LOOP macro is actually a very different approach: it's an embedded
domain-specific language for iteration tasks. The language is meant to be more
natural ('conversational') than formal, an actual influence from Interlisp,
where this macro originally came from.

If we talk about the approach of using larger building blocks with named
parameters, that's also typical in UNIX, since most UNIX commands have
zillions of named arguments, even more so than typical Common Lisp functions.
In interactive command languages it is typically easier to fill out common
options to a command than to write programs.

For example 'ls' does not expose a bunch of functions to combine, but is a
powerful command with lots of named options:

[https://man7.org/linux/man-pages/man1/ls.1.html](https://man7.org/linux/man-
pages/man1/ls.1.html)

Named parameters/arguments are not common across Lisps in general. Emacs Lisp
did not have them and explicitly chose to avoid them.

------
wrs
Common Lisp doesn’t just have LOOP, it also has Series [1]. I think the actual
philosophical difference may be that Haskell does this with compiler support
for stream function composition, whereas Common Lisp does it with a set of
language-level macros any user could write (if they were as smart as Richard
Waters).

[1]
[https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node347.html](https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node347.html)

~~~
dan-robertson
Strictly speaking, Common Lisp does not have series, because they were not
included in ANSI Common Lisp. So even though you could use the macros provided
by a separate series package, you'd need to make sure that all your code used
that package (rather than the CL definitions of let and lambda and so on) and
properly declared series functions. This ends up making code that uses series
not very composable, which isn't great.

I also think it’s a bit unfair to say that Haskell’s stream fusion requires
special compiler support. The compiler doesn’t recognise list functions
specifically or have special cases for list-looking-types (except for syntax).
Instead there is a general mechanism to say “at stage number x, replace
expressions that look like y with z”, where there are something like 10 stages
and expressions y and z are type checked so these transformations are only
applied in “valid” cases. I think saying that this is special compiler support
for stream fusion is like saying that macros are special compiler support for
series in CL.

A final thing to note is that because of lazy evaluation, stream fusion in
Haskell doesn’t change the semantics of the Haskell list functions from how
they would behave without it. In Common Lisp streams must be explicitly
separated from lists because they are semantically different (in the
evaluation order of the functions that act on them).

~~~
reikonomusha
GHC _does_ have special support for fusion and rewriting. See e.g. [1]. The
laziness comes with the language, but turning it efficient requires a series
of these rewrite rules expressed as GHC-specific metadata.

[1] [https://markkarpov.com/tutorial/ghc-optimization-and-
fusion....](https://markkarpov.com/tutorial/ghc-optimization-and-
fusion.html#rewrite-rules)
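For concreteness, a sketch of what such a rule looks like, using the classic map/map example from the GHC users guide (the rule only fires under optimisation, and semantics are unchanged either way):

```haskell
{-# RULES
"map/map" forall f g xs. map f (map g xs) = map (f . g) xs
  #-}

-- Two traversals that the rule above lets GHC fuse into one:
doubleThenInc :: [Int] -> [Int]
doubleThenInc xs = map (+ 1) (map (* 2) xs)
```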

~~~
sukilot
GHC is the most complicated compiler of any language (or very near to it),
except perhaps Mathematica which is similar:

It's an optimizer for a graph-reduction engine.

It doesn't mean much to call out "special support" for any one bit of style of
programming, since the _language_ itself has nearly no concept of performance
-- you just write functional code and the compiler will find the fastest
runtime computation of the evaluation that it can.

~~~
eru
> It doesn't mean much to call out "special support" for any one bit of style
> of programming, since the language itself has nearly no concept of
> performance -- you just write functional code and the compiler will find the
> fastest runtime computation of the evaluation that it can.

While GHC is glorious, it's still very limited in what it can do, performance
wise.

You are right that languages themselves seldom have a direct concept of
performance. But language features and restrictions have a big impact on
achievable performance. Eg Haskell's purity-by-default means that the compiler
has quite a bit of freedom when choosing how to translate something like eg
the 'map' or 'filter' functions.

The equivalent in C++ gives the compiler less flexibility, because the helper
function handed over at runtime might have side-effects.

Similarly, Haskell functions are still allowed one important effect:
non-termination. In a more restricted language like Agda that's not allowed,
giving the compiler even more degrees of freedom to work with.

Or in a different direction: the query optimizers for SQL can do very
impressive work, because SQL is so limited.

------
wirrbel
It's a great article, but unfortunately it barely mentions Lisp's function
composition capabilities, which is a shame because it could make a nice case
for the differences in API design between Haskell's standard library and
Common Lisp's.

Lisp did in fact pioneer the introduction of higher-order functions and
function composition into programming languages, and it's fairly remarkable
that lisps are still around and highly versatile and usable.

What's being shown are Collection Pipelines
([https://www.martinfowler.com/articles/collection-
pipeline/](https://www.martinfowler.com/articles/collection-pipeline/)), which
can be implemented in lisp, and in fact are. Clojure and Scheme certainly have
these procedures, and I am sure they are available for Common Lisp as well.

Whether one prefers Haskell or Lisp is probably in part a subjective decision
(and also has to do with socialisation). I think I fall on the lisp side of
things because of the explorative nature of programming: I had my eureka
moment of programming with Test-Driven Development, for some reason I
gravitate to that style in my programming, and it's actually easier to do TDD
without a type-checker.

For anyone who is curious about exploring Lisps, I suggest having a look at
Clojure or Common Lisp (potentially Dylan if you don't want to bother with
parentheses) and not Scheme (because it doesn't have polymorphism and is
heavily fragmented).

------
dllthomas
Uh guys?

    
    
        [1]> (remove-if-not #'evenp `(1 2 3 4 5 6 7 8 9 10) :count 3 :start 1)
        (1 2 4 6 8 9 10)
    

As pointed out by owl57, the Haskell translation is incorrect and the correct
implementation isn't quite so trivial. I don't see a way to pull the :count
argument out into a separate function. We certainly can for skip, though, and
I might.

    
    
        Prelude> skipThen 1 (removeIfNot even 3) [2,3,4,5,6,7,8,9]
        [2,4,6,8,9]
    

as implemented:

    
    
        removeIfNot p count = go count
          where
            go _ [] = []
            go 0 list = list
            go n (x:xs) | p x = x:go n xs
                        | otherwise = go (n - 1) xs
    
        skipThen count f list =
            let (pre,suf) = splitAt count list
             in pre ++ f suf

~~~
nybble41
You could decompose removeIfNot a bit further by representing the intermediate
stages as pairs of lists (result + unprocessed input):

    
    
        import Data.Bifunctor (bimap, first)
    
        type Split a = ([a], [a])
        
        unsplit :: Split a -> [a]
        unsplit = uncurry (++)
    
        splitStart :: [a] -> Split a
        splitStart xs = ([], xs)
    
        skip :: Int -> Split a -> Split a
        skip n (xs, ys) = first (xs ++) (splitAt n ys)
    
        iterateN :: Int -> (a -> a) -> a -> a
        iterateN n f = (!! n) . iterate f
    
        removeOneIfNot :: (a -> Bool) -> Split a -> Split a
        removeOneIfNot p (xs, ys) = bimap (xs ++) (drop 1) (span p ys)
        
        GHCI> unsplit . iterateN 3 (removeOneIfNot even) . skip 5 . splitStart $ [1..15]
        [1,2,3,4,5,6,8,10,12,13,14,15]

~~~
dllthomas
Very nice! I had a bit of trouble convincing myself that it has the laziness
I'd want out of it, but by experimentation it seems to.

Still, definitely a more complicated decomposition than that described in the
article (but maybe a more interesting article!)

~~~
nybble41
With one more combinator we can decompose this slightly further:

    
    
        import Data.Bifunctor (bimap, first, second)

        stepSplit :: ([a] -> Split a) -> Split a -> Split a
        stepSplit f = uncurry first . bimap (++) f
        
        skip n = stepSplit (splitAt n)
        
        removeOneIfNot p = stepSplit (second (drop 1) . span p)
    

Or in perhaps-more-familiar monadic terms:

    
    
        import Control.Monad (replicateM_)
        import Control.Monad.Trans (lift)
        import Control.Monad.Trans.State (StateT, evalStateT, state, get)
        import Control.Monad.Trans.Writer (WriterT, execWriterT, tell)
        import Data.Bifunctor (second)
        import Data.Functor.Identity (Identity, runIdentity)
    
        type SplitterT b m = WriterT [b] (StateT [b] m)
        type Splitter b = SplitterT b Identity
    
        splitter :: Monad m => ([b] -> ([b], [b])) -> SplitterT b m ()
        splitter f = lift (state f) >>= tell
    
        execSplitterT :: Monad m => SplitterT b m a -> [b] -> m [b]
        execSplitterT m = evalStateT (execWriterT (m >> lift get >>= tell))
    
        execSplitter :: Splitter b a -> [b] -> [b]
        execSplitter m = runIdentity . execSplitterT m
    
        skip :: Monad m => Int -> SplitterT b m ()
        skip n = splitter (splitAt n)
    
        removeOneIfNot :: Monad m => (b -> Bool) -> SplitterT b m ()
        removeOneIfNot p = splitter (second (drop 1) . span p)
    
        GHCI> execSplitter (skip 5 >> replicateM_ 3 (removeOneIfNot even)) [1..15]
        [1,2,3,4,5,6,8,10,12,13,14,15]
    

It's hard to say which version is clearer. It probably depends on what you're
used to. The code is essentially the same under the Writer/State abstraction,
with `execSplitter` replacing both `unsplit` and `splitStart` and monadic bind
in place of function composition. The `Splitter b ()` type is isomorphic to
the `[b] -> ([b], [b])` used for the argument to `stepSplit` in the first
version.

~~~
dllthomas
_" Whatever happened to the Popular Front, Reg?"_

------
GuB-42
A small note: composition is not better than monolithism. The article makes it
feel like it is, but it isn't. They are two perfectly valid approaches when it
comes to solving problems, and often a mix of both is used.

Monoliths tend to be better at solving common real life problems efficiently
while composition is better suited for unexpected abstract problems. We need
both.

For example GNU binutils are not very "unixy", with plenty of "monolithic"
options but they still work well with pipelines.

~~~
xixixao
I did find it ironic that composition was called UNIX philosophy, when most
UNIX tools are complicated functions with many options much more akin to the
lisp function examples. (Of course it's the "UNIX pipes" part of the
philosophy that's being alluded to, but it's still a funny twist)

~~~
sukilot
Yeah and the US is a democracy, and Christians are Christlike. We don't always
live up to our ideals.

------
devin
I was a bit puzzled. Clojure seems to provide both avenues.

The "Common Lisp" version:

    
    
      (for [i (range 1 10)
            :when (even? i)
            :while (< i 5)]
        i)
    

A Haskell-y version (according to the article):

    
    
      (->> (range 1 10)
           (take-while #(< % 5))
           (filter even?))
    

Using Clojure's transducers:

    
    
      (into [] (comp
                (take-while #(< % 5))
                (filter even?))
            (range 1 10))

------
dleslie
There's a more fundamental philosophical difference:

Common Lisp promotes the use of docstrings and apropos tools to discover and
understand behaviour, with symbol names that tend to describe what they
represent.

Haskell leans on type signatures and symbolic operators.

IMHO, a pseudo-random sequence of non-word symbols tied to a type signature is
of very little use.

IE, something fabricated:

    
    
        (-*/) :: A -> M c -> B
    

Ah yes, so understandable.

~~~
vore
People who write Haskell don't just leave their symbolic operators
undocumented. It's definitely heavy on the symbols but that and type
signatures are certainly not the only tools Haskell devs have.

~~~
dllthomas
On the one hand, the parent's comment is certainly hyperbole.

On the other hand, it's also the case that too many Haskell projects are
under-documented: probably worse than the typical language, and certainly
worse than the best-in-class.

On the gripping hand... while types make poor documentation, they are _always
correct_ documentation, and they are automatically generated documentation.
When I write Python or (apparently) Common Lisp, I can expect to get
reasonable documentation by interrogating anything someone hands me. In
Haskell, insofar as I have learned to get good information from the types, I
get some (correct!) documentation for things that I have newly assembled
myself!

~~~
newen
I find type information for functions incredibly useful. I get so lost in
Python code once it gets past a certain size, even after years of Python
programming experience, because of its lack of ability to put a useful amount
of structure into the code. After having programmed in Haskell for a while, I
just miss tabbing a function and being able to see its type signature.

~~~
dleslie
That, and more, is available in common lisp.

    
    
        * (describe #'+)
        #<FUNCTION +>
          [compiled function]
        
        
        Lambda-list: (&REST SB-KERNEL::NUMBERS)
        Declared type: (FUNCTION (&REST NUMBER) (VALUES NUMBER &OPTIONAL))
        Derived type: (FUNCTION (&REST T) (VALUES NUMBER &OPTIONAL))
        Documentation:
    
          Return the sum of its arguments. With no args, returns 0.
    
        Known attributes: foldable, flushable, unsafely-flushable, movable, commutative
        Source file: SYS:SRC;CODE;NUMBERS.LISP
    

Docstrings and type information can (optionally, but recommended) be added to
all code. Users of your API can use documentation/describe to discover
information about it without ever having to look at the implementation.

------
dan-robertson
I think it should be stressed that most languages (except maybe bash I guess)
can’t provide the kind of compositionality that Haskell does.

This is because Haskell’s lazy evaluation allows these compositions to do an
asymptotically appropriate amount of work. Consider this Haskell:

    
    
      xs = [1..20]
      p n = if n < 13 then even n else p n
      ys = take 5 . filter p . drop 3 $ xs
      main = print ys
    

If a strict language were being used then the processing would be as follows:

1. drop 3 xs is [4..20].

2. When computing filter p [4..20], everything is ok up to 12, but computing
p 13 goes into an infinite loop.

3. We never get a result to compute take 5 of.

Because Haskell is lazy, it will not try to compute p 13 and so can do the
right amount of work.

Perhaps a more reasonable example would be that if the list xs were really
long, a strict program would have to evaluate p against every element before
taking the first 5 results (so it would take linear time if p were constant
time) while a lazy program would only need to compute p on enough elements to
collect 5 results.

Common Lisp was designed for older, slower computers than Haskell and to be
able to make programs which were reasonably efficient. It also had strict
evaluation and so with this context, the extra arguments to many functions for
special cases (which lots of people didn’t like) or the combined iteration of
loop (which lots of people didn’t like) make a lot more sense.
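The asymptotic point is easiest to see with an infinite list (a sketch; the name is illustrative):

```haskell
-- The composed pipeline does only the work demanded: filtering an
-- *infinite* list still terminates, because take 5 forces only enough
-- elements to produce five results.
firstFiveEvens :: [Int]
firstFiveEvens = take 5 . filter even $ [1 ..]
```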

~~~
sukilot
Since bash and the Unix core utils are written in C, I strongly challenge your
claim that "most languages" can't do stream processing. You don't have to have
a lazy language to implement a lazy data type.

~~~
bollu
The difference is between pervasive laziness (laziness by default) vs.
laziness by opt-in.

If you have laziness by opt-in, then most parts of the language will _not_
opt in [from prior experience with, say, Racket, which also has lazy
streams].

It's the pervasive and efficient compilation of laziness in Haskell which
makes it actually useful to write lazy programs.

Now, am I a fan of writing lazy programs? No, I find the upshot of referential
transparency gained by laziness to be overshadowed by the warts it brings ---
lack of a good debugging experience, tie-the-knot semantics being fragile,
etc. I'd rather live in a strict total programming language.

But one cannot "handwave" away laziness in haskell that easily by saying "you
can emulate it in $STRICT_LANGUAGE". By the exact symmetric argument, one can
emulate strict code inside haskell: just use `ST` or `IO`. But very rarely
does one see whole libraries written strictly, because this style of code
doesn't compose with the (default) laziness and purity in haskell.
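A sketch of that symmetric direction, opting into strict, imperative-style code inside pure Haskell via `ST` (`sumStrict` is an illustrative name):

```haskell
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Local mutation with a strict accumulator; runST guarantees the
-- mutation cannot leak, so the function remains pure from outside.
sumStrict :: [Int] -> Int
sumStrict xs = runST $ do
  ref <- newSTRef 0
  mapM_ (\x -> modifySTRef' ref (+ x)) xs
  readSTRef ref
```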

------
schemy
The main difference is that in lispy languages you wouldn't use the language
itself to solve the problem. You'd extend it. I'd write a function that is
(drop3-take5 test list) which does exactly what it says.

This might seem like huge overkill, but when the specs change and I need to
set the start and stop from a config file I can easily refactor drop3-take5
into (drop-n-take-m n m test list), with a let around it to read the config.

drop3-take5 would look something like:

    
    
        (define drop3-take5-list
          (lambda (ls filter)
            (define internal
              (lambda (ls matches)
                (cond
                 ((null? ls) '())
                 (else
                  (if (filter (car ls))
                      (cond
                       ((< matches 3) (internal (cdr ls) (1+ matches)))
                       ((< matches 8) (cons (car ls) (internal (cdr ls) (1+ matches))))
                       (else '()))
                      (internal (cdr ls) matches))))))
            (internal ls 0)))

~~~
vore
This seems like an unsustainable approach: will you need to write these
bespoke functions for any combination of behaviors you would need?

~~~
schemy
Lisp is about writing a dsl that solves your problem. It just happens to be an
s-expression dsl. Exactly how it works depends on the problem domain. But in
my experience a lisp dsl is much less brittle than trying to map the problem
into a haskell program, especially with a changing spec.

This is a bad example because it is so contrived to show off the strengths of
haskell.

As an example for the strengths of lisp: I wrote a logical dsl that adds a
boolean dependent type system to scheme using macros. It's 400 lines long and
copies all the bits from Idris I needed and it still feels like a scheme and
is fully interoperable with regular scheme.

------
owl57
The remove-if-not example is pretty funny. No, it doesn't have any obvious
translation into function composition, and in particular isn't equivalent to
the Haskell example. Instead, :count specifies _how many elements to remove_
and :start specifies _when to start removing_.

------
agumonkey
I wonder if the fat macro eDSL wasn't just a temporary trend in lisp. Some
lispers like small and composable... after all, that was part of the reasoning
behind scheme. picolisp is also very combinator-minded. Clojure likes that too
(even though it came after the FP revival).

------
nabla9
Deeper philosophical differences:

* Lisp is a multi-paradigm language. Haskell is a pure functional language.

* Haskell wants to be mathematics. Lisp wants to be an operating system.

I believe the composition vs. monolithism difference comes from these deeper
differences.

~~~
lispm
> Lisp wants to be an operating system.

Lisp wants to be interactively used.

------
sillysaurusx
I dislike the lisp convention of remove-if, remove-if-not, etc. Filter and
take isn't much better. The python convention is much nicer:

    
    
      [x for x in foo if x not in y]
    

Turns out, this can be expressed in Lisp:

    
    
      (list x for x in foo if x not in y)
    

I've implemented this into my fork of Lumen that runs on Python.
[https://github.com/shawwn/pymen](https://github.com/shawwn/pymen) It feels
very nice to use:

    
    
      > (list x for x in (range 10) if (= (% x 2) 0))
    
      """
      Built-in mutable sequence.
    
      If no argument is given, the constructor creates a new empty list.
      The argument must be an iterable if specified.
      """
      [0, 2, 4, 6, 8]
      >
    

It wasn't even that ugly to implement. It's ~20 lines:
[https://github.com/shawwn/pymen/blob/1c191c9e00e73303a6479f8...](https://github.com/shawwn/pymen/blob/1c191c9e00e73303a6479f8f56ecb0ba84bb1566/macros.l#L27-L55)

One other neat thing is that the REPL prints out the docstring of the last
evaluated thing by default. It makes it nice to explore libraries:

    
    
      > (import numpy as np)
    

[https://imgur.com/1rpF95y](https://imgur.com/1rpF95y)

    
    
      > np.vsplit
    

[https://imgur.com/OjIvTWt](https://imgur.com/OjIvTWt)

It's like an automatic help() call on the last evaluated thing.

(If anyone happens to try it out, you can get into a REPL by running `rlwrap
bin/pymen`, or just `bin/pymen`.)

~~~
garethjrowlands
For those that didn't know, Haskell does of course have list comprehensions:

    
    
        [x| x <- foo, not (elem x y)]
    

Here's an example finding consonants:

    
    
        [c | c <- ['a'..'z'], not $ elem c "aeiou"]

~~~
centimeter
No need for the dollar sign in the second example :)

A nice thing about Haskell list comprehensions is that they're based on a
trivial transformation to monad syntax, which makes it easier to factor out
complex list transformations into multiple little pieces. I've run into this
problem in Python. Also, Python list comprehensions can often behave very
unexpectedly due to mutability.

Here's an example of the monad syntax:

    
    
        [(x,y) | x <- [1..10], y <- [1..x], odd (x + y)]
    

equiv

    
    
        do
          x <- [1..10]
          y <- [1..x]
          guard $ odd (x + y)
          return (x,y)
    

Lots of combinatorics problems can be elegantly solved using the list monad
like that.

~~~
mypalmike
Example where Python list comprehensions behave very unexpectedly...?

------
jim-jim-jim
I've noticed that these differing approaches lead to argument order being
flipped sometimes.

[https://wiki.call-
cc.org/man/5/Module%20(chicken%20string)#s...](https://wiki.call-
cc.org/man/5/Module%20\(chicken%20string\)#string-split)

Because Chicken's `string-split` expects its delimiter to be optional, it
wants a string first.

[https://hackage.haskell.org/package/text-1.2.4.0/docs/Data-T...](https://hackage.haskell.org/package/text-1.2.4.0/docs/Data-Text.html#v:splitOn)

While Haskell's emphasis on composition means it's more logical for the
constant parameter—the delimiter—to be provided first.

It's not a great example, but the first one I can think of. The difference is
starker when the Lisp command has many more parameters than the Haskell one,
with some of them being optional. When trying to write cute point-free code in
Scheme, I found myself using `flip` too much for it to be fruitful.
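
The same pressure shows up in any language with partial application; here is a small Python sketch (the `split_on` helper is hypothetical) of why constant-first argument order composes nicely:

```python
from functools import partial

def split_on(delim, s):
    """Delimiter-first argument order, in the spirit of Data.Text's splitOn."""
    return s.split(delim)

# With the constant parameter first, specialization needs no flip:
split_csv = partial(split_on, ",")
fields = split_csv("a,b,c")   # ['a', 'b', 'c']
```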

~~~
thomasahle
I hate when I have to use `flip`, and I dislike reading it even more. I don't
know what the solution is, other than using points.

------
quickthrower2
I am less experienced than OP, but I got the impression the philosophical
difference is that with Lisp you are trying to abstract things with a simple
language plus AST code generation (macros), whereas in GHC Haskell you are
trying to abstract things with a very rich language heavily based on
mathematical ideas (see all those language extensions!).

~~~
eru
Yes. Though in theory you could use Haskell with a Lisp-like syntax just fine.

Many of the language extensions are sugar over a simpler (implied) core of the
language. Some are purely syntactic, like NumericUnderscores. Some others like
GADTs are semantic, but still mostly explained in terms of mapping to this
simpler language.

------
mtraven
> One difference in philosophy of Lisp and Haskell is that the latter makes
> liberal use of many tiny functions that do one single task. This is known as
> composability, or the UNIX philosophy.

Have you ever looked at a Unix man page?

And while Common Lisp has some functions that might be swiss army knives of
functionality, that is not "the philosophy of Lisp", it's a particular style.
Clojure and Scheme are just as much Lisp as CL, and as other people have
noted, their style is very similar to Haskell.

------
etangent
When I started out with Python, I was learning Haskell in parallel. As a
result, my early Python code ended up being influenced by Haskell (I wrote
many small functions that performed a single task). But it was somewhat
frustrating. Later, I developed my Python style and began heavily relying on
optional arguments through `kwargs` and duck typing. That made my programs
much more effective and easier to reason about. Now I realize that my later
Python code became more Lisp-like.

------
seisvelas
Reminds me of the difference between Unix and GNU. I wonder to what extent
RMS's Lisp background influenced GNU's kitchen-sink tendencies.

------
mark_l_watson
I really enjoy programming (a lot!) in both Common Lisp and Haskell, and I
agree with his comments on both languages.

Chris and I probably differ in that I love the kitchen-sink built-in
functions. That said, it is also common to write lots of very short Common
Lisp functions.

------
m12k
Any particular reason why Lisp doesn't also tend to compose with smaller
functions? Did it just happen to evolve that way? Is it easier to get the
types of the smaller functions right when you have a compiler to help you like
you do in Haskell?

~~~
layoutIfNeeded
For one thing, Lisp is not lazy like Haskell.

~~~
howling
The article elaborates on this point a bit:

> Like pipes in UNIX, the functions are clever enough to be performant when
> composed together–we don’t traverse the whole list and generate a new list
> each time, each item is generated on demand. In fact, due to stream fusion,
> the code will be compiled into one fast loop.

I imagine it is hard to do the same optimization in a non-lazy language.

~~~
klipt
It's interesting that he says you also need purity to do the optimization, but
Unix pipes are not necessarily pure.

~~~
garethjrowlands
Unix pipes don't do stream fusion, which is the optimisation to which he was
referring.

~~~
klipt
So if I put (x1, x2...) through pipes f and g, instead of calculating all the
f(x)s first followed by all the g(f(x))s, it calculates g(f(x1)), then
g(f(x2)), etc.

How is that order of operations different to stream fusion? Why does stream
fusion require more purity than Unix pipes?

~~~
newen
Let's say there's a global mutable variable M (it doesn't have to be global;
it could be x1.M = M, x2.M = M, for example, but global is easier to think
about) which is read and modified whenever f(x) or g(y) runs. Then the order
of operations matters, and you get different results with and without stream
fusion.
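
That point can be made concrete with a Python sketch (the stages and the shared counter `M` are purely illustrative): run the same two-stage pipeline unfused (two full passes) and fused (one pass), with both stages touching shared state.

```python
M = 0  # shared mutable state read and written by both stages

def f(x):
    global M
    M += 1
    return x + M

def g(y):
    global M
    M += 1
    return y * M

def run(xs, fused):
    global M
    M = 0
    if fused:
        return [g(f(x)) for x in xs]  # one pass, like a fused loop
    ys = [f(x) for x in xs]           # all the f's first...
    return [g(y) for y in ys]         # ...then all the g's

unfused_result = run([1, 2], fused=False)   # [6, 16]
fused_result = run([1, 2], fused=True)      # [4, 20]
```

Same stages, same inputs, different answers: once the stages share state, the fusion transformation is no longer meaning-preserving.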

~~~
klipt
Right, but couldn't the same thing happen with Unix pipes if the components
write to a global file?

You can do the same optimization Unix pipes does without language enforced
purity, just with a caveat that functions sharing state might behave weirdly
when fused.

~~~
newen
You could... I guess language designers generally avoid those types of
optimizations since they can bring about unexpected results. That is, if the
code syntactically uses one order of operations but semantically uses another,
then even if it's documented, lots of programmers would stumble into
difficult-to-find bugs and be surprised by this behavior. Or worse, just get
wrong results and never find out.

------
tester89
> One difference in philosophy of Lisp (e.g. Common Lisp, Emacs Lisp) and
> Haskell is that the latter makes liberal use of many tiny functions that do
> one single task. This is known as composability, or the UNIX philosophy.

This honestly seems like a misunderstanding that ignores how many options
POSIX commands actually accept. The Unix philosophy doesn't mean literally one
thing, but rather one task; sometimes the task will have different parameters,
and that's OK, but we're not going to have `ls` take out the trash.

------
karmakaze
> One difference in philosophy of Lisp (e.g. Common Lisp, Emacs Lisp) and
> Haskell is that the latter makes liberal use of many tiny functions that do
> one single task. This is known as composability, or the UNIX philosophy.

I would rather say that each tiny function handles one factor, and that a
program is the decomposition into factors. Lisp also composes functions, but
they are chunkier and can be ad-hoc one-offs. Static types vs. runtime errors
is another point of difference.

------
sxp
Javascript and Python have similar capabilities, but they're missing the
stream fusion optimization mentioned at
[https://chrisdone.com/posts/stream-composability/](https://chrisdone.com/posts/stream-composability/)

Javascript:

    
    
      a = [...Array(20).keys()]
      p = x=>!(x%2);
      a.slice(3).filter(p).slice(0,3)
    

Python:

    
    
       from itertools import islice  # filter() returns an iterator in Python 3
       a = range(20)
       p = lambda x: not x % 2
       list(islice(filter(p, a[3:]), 3))

~~~
sergeykish
The code is not equivalent to the Common Lisp (nor is the Haskell version); it
should remove at most `count` elements failing the predicate, starting at
index `start`:

    
    
        (remove-if-not #'evenp '(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15) :count 5 :start 3)
        (1 2 3 4 6 8 10 12 14 15)
    
        a = (1..15)
        r = []
        c = []
        a.each.with_index do |e, i|
          if i <= 3
            r << e 
          elsif c.length < 5
            if e.even?
              r << e
            else
              c << e
            end
          else
            r << e
          end
        end
        r
        #=> [1, 2, 3, 4, 6, 8, 10, 12, 14, 15]
    

I can see no simple way to compose this

    
    
        a = (1..15)
        l = a.take(3)
    
        c = 0
        m = a.drop(3).map.with_index { |e, i| [e, i, e.even?] }.take_while { |e, i, p| c +=1 unless p; c <= 5 }
        _, last_m_index, _ = m.last
        m = m.filter { |e, i, p| p }.map { |e, i, p| e }
    
        r = a.drop(3 + last_m_index + 1)
        l + m + r
        #=> [1, 2, 3, 4, 6, 8, 10, 12, 14, 15]
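
For comparison, the `:count`/`:start` semantics fit in one ordinary Python function (a hypothetical `remove_if_not`, mirroring CL's keyword arguments rather than trying to compose primitives):

```python
def remove_if_not(pred, seq, count=None, start=0):
    """Drop elements failing pred, but only at index >= start and only
    up to count removals; everything else passes through unchanged."""
    out, removed = [], 0
    for i, e in enumerate(seq):
        if i >= start and not pred(e) and (count is None or removed < count):
            removed += 1
            continue
        out.append(e)
    return out

result = remove_if_not(lambda x: x % 2 == 0, range(1, 16), count=5, start=3)
# [1, 2, 3, 4, 6, 8, 10, 12, 14, 15], matching the CL output above
```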

------
eru
> Having written my fair share of non-trivial Emacs Lisp (and a small share of
> Common Lisp; I’ve maintained Common Lisp systems) and my fair share of non-
> trivial Haskell I think I’m in a position to judge.

While I prefer Haskell, judging Lisps by Emacs Lisp isn't entirely fair. It's
probably the worst Lisp in somewhat wide use.

('newLisp' is worse, of course. But thankfully no one in their right mind uses
it.)

------
kazinator
The main difference is that Haskell is a very specific language, whereas Lisp
is a family. Not all the members of the family are based on exactly the same
philosophy.

The _remove-if-not_ function is just a Common Lisp thing.

And, by the way, at the bottom of its description in the standard, we find
this:

 _The functions delete-if-not and remove-if-not are deprecated._

~~~
pfdietz
That deprecation is meaningless, since there is no standard body to revise the
standard, and users are perfectly willing to continue to use those functions.

------
euske
I tend to think that most natural languages use the same kitchen-sink approach
as Lisp, and as a consequence they have sprawling vocabularies. For some
peculiar reason, people seem to like it that way. So, naturally speaking, I
guess Lisp is probably going to be more popular than Haskell.

------
sergeykish
Guys, it produces different results; check for yourself:

    
    
        Prelude> take 5 . filter even . drop 3 $ [1, 2, 3, 4, 5, 6, 7, 8, 9]
        [4,6,8]
    
        (remove-if-not #'evenp '(1 2 3 4 5 6 7 8 9) :count 5 :start 3)
        (1 2 3 4 6 8)

------
_prototype_
I write simple Lisp functions that are not "configured" the way this guy
describes, so I'm not sure I agree. The philosophical difference he talks
about seems more of a paradigm choice than an inherent language construct.

------
ACow_Adonis
I'm a little bit confused about the notion that composition isn't used much in
lisp as opposed to Haskell.

Isn't composition literally the benefit of S-expressions, prefix notation, and
everything being a function?

~~~
smabie
Lisps lack implicit currying, so function composition becomes much more
verbose.

And functions aren't that important in CL: not everything is a function, and
CL is usually written in an OOP or procedural style. CL is more about data and
AST composition than anything else.

Moreover, CL is a lisp-2, so in a very real sense, functions are second class
citizens in the language.

~~~
pfdietz
Or, you use reader macros to make currying concise.

[https://github.com/eschulte/curry-compose-reader-
macros](https://github.com/eschulte/curry-compose-reader-macros)

Functions are not second class citizens. They are privileged with their own
namespace. That's a perk, not a penalty.

One notable form of composition in CL is method combination. Does any other
language support something like that?

~~~
smabie
Having to prefix functions with #' is a perk? No other language uses a
separate name space for functions, and for good reason, because it sucks.

~~~
pfdietz
It's an optimization for usability. Many more symbols occur as the heads of
list forms than appear after #'. So, the common case is optimized for clarity
and the relatively uncommon case requires two extra characters.

I completely disagree that it sucks. What sucks is having to remember not to
have your variable names collide with your function names (or indeed any of
the other namespaces in CL). Things that are usually used in different ways
benefit from having separate namespaces.

------
pizza234
Unrelated to the main subject, but I wonder why, in the shell example, the
user is asked to write out all the numbers from 1 to 10 (via cat!) when `seq 1
10` is equally expressive and much more concise.

------
galaxyLogic
I think it is largely because the Common Lisp standard library was written in
the keyword-arguments fashion. I assume it could also have been written using
small composable functions. No?

------
sukilot
It's a shame that the blog post's buggy Haskell, which breaks the rhetorical
argument, has been live for 5 years and never been corrected or withdrawn.

------
wudangmonk
This is really just talking about which functions they decided to include in
their standard libraries. Clojure, for example, takes the Haskell approach, so
it's not a question of composability vs. monolith.

Sure, composability is nice and should probably be the way you start creating
something. But once you are satisfied that it works the way you want and you
want to make it more efficient, you will go monolith and treat this new thing
as a bigger piece you can compose with.

Someone new can then come along, fail to understand why it's a monolith, and
decide to make it better by breaking it up into composable parts again.

