
How Lisp is Going to Save the World - smartial_arts
http://landoflisp.com/#guilds
======
bjourne
Great story. Unfortunately, the API documentation and tutorial writing guilds
went extinct decades ago and have never been able to return to Lispland. :)

~~~
stcredzero
Seriously, this is a great point. One of the big problems with a small
community of programmers who all think of themselves as elite is that tasks
like documentation and tutorial writing go by the wayside. This comment should
be a wake-up call to any smart and far-sighted individuals who want to promote
a language.

~~~
calinet6
Yep. They have time to draw elaborate cartoons, but not elaborate
documentation! Funny how that works.

~~~
octopus
The author (Conrad Barski) wrote a book: Land of Lisp ...

------
thecombjelly
I love lisp. I use it (and have used it) to build my web service product[1]
and anything else I can.

=how do I do X?=

java: something similar to X is already done in Y. add abstract class and redo
X and write Y. +200 LOC

lisp: something similar to X is already done in Y. realize you can generalize
X and Y into a new pattern and use it for ABC too. -70 LOC

=there is a bug in function X=

java: open X.java. edit line. restart program. X seems to be working
correctly.

lisp: two hours later, ah ha! I understand this code. fix X. write unit test.
eval unit test. X works correctly.

[1] demo: <https://a.keeptherecords.com/demo>, source:
<https://github.com/ThomasHintz/keep-the-records>

~~~
theevocater
Look, I like Lisp as much as the next guy, but you can "spend two hours"
reading Java code, write a unit test, eval said test, and then woohoo! You can
do that in basically any modern language.

And your first example is something that most modern languages are sold on.
The truthiness of those statements can be debated, but your examples are
flawed, to say the least.

Point is: you can generalize and write unit tests in _any_ language.

~~~
thecombjelly
That is true to a degree, but you really cannot generalize very well at all in
Java. You will also be hard-pressed to find Java code written functionally
enough that you can just run a unit test and know it is working correctly.
Usually you have to instantiate many other classes and stub in test data via
something like Guice. There are many places for bugs to hide in that type of
setup.

------
david927
With the deepest reverence to John McCarthy, I regret to say that Lisp is our
cosmological constant. It creates a static universe that is otherwise
expanding; it reduces the problem to a solvable one and then declares victory.
The truth is, it's all state. All of it.

Remember that movie "Boy in the Plastic Bubble," about a boy with an immune
system deficiency who lives inside a hermetically sealed, sterile bubble?
That's not the answer. We instead need to build immune systems (read: robust
systems) rather than simply sealing the germs out.

[I know I'm simplifying things, and this is not meant to be a slight.]

~~~
martinced
_"The truth is, it's all state. All of it."_

The problem is not state. The problem is being able to recreate the state:
both for testing and for business purposes.

The bigger problem is that most programmers, like you, knee-jerk at the issue
of "recreating the state" and declare that _"It cannot be done, because it's
all state"_.

And hence we have both languages that are built on the premise that mutability
is a virtue and back-end databases (your typical CRUD SQL DB) built upon the
same false premise.

I'll give you one example: Monday morning the service desk calls because one
user of your app, at 3:07pm on Friday, experienced a bug.

Can you "recreate the state" your application was in at that time, so as to
figure out what triggered the bug?

You probably can't, because your DB has changed in the meantime: you're SOL
because the 'U' and the 'D' in CRUD are destructive. It's mutability. It's the
enemy of determinism.

So now you're stuck calling your DB admin, asking for a dump of the prod DB
from last Thursday evening and a dump of the log of the transactions that
happened on Friday... And you're spending hours and hours trying to recreate
the state the environment was in when the sh*t hit the fan. And you may or may
not be able to do it.

It's just one hypothetical scenario, but things like that are the daily lot of
_many_ programmers.

But software development, in many cases, shouldn't be that painful. If you
were to use a CRA DB (Create Read Append) and languages favoring immutability
and a more functional approach overall, you'd have a much much easier time
recreating the state.
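
To make that concrete, here is a minimal, hypothetical sketch of a Create
Read Append store in Python (an in-memory toy, not any real database): rows
are only ever appended, so "recreating the state" at a past moment is just a
replay of the log.

```python
class CraStore:
    """Toy append-only (Create-Read-Append) store: no UPDATE, no DELETE."""

    def __init__(self):
        self.log = []  # (timestamp, key, value) tuples, timestamps ascending

    def append(self, ts, key, value):
        # The only write operation: a new fact is recorded, old ones survive.
        self.log.append((ts, key, value))

    def state_at(self, ts):
        # Replay every fact recorded up to `ts`; the latest write to a key
        # wins, but nothing before it has been destroyed.
        state = {}
        for t, key, value in self.log:
            if t > ts:
                break
            state[key] = value
        return state

store = CraStore()
store.append(1, "order-42", "pending")
store.append(2, "order-42", "shipped")
store.append(3, "order-42", "lost")

assert store.state_at(1) == {"order-42": "pending"}
assert store.state_at(3) == {"order-42": "lost"}
```

Because nothing is ever updated or deleted in place, the "what did the app
see at 3:07pm on Friday?" question becomes an ordinary query instead of a
forensic exercise.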

If you think about it, it's all a gigantic deterministic machine.

So why can't we accept that the notion of time is an important one?

Why can't you realize that the battle the likes of Rich Hickey are fighting
is worth it?

It _is_ possible to use programming languages and DBs (or wrappers like
Datomic in front of SQL DBs) that definitely make it easier to reason about
programs, and that make it just so much easier to recreate the state _and_ to
query the past (which has a lot of business value).

Why do you react like this: _"The truth is, it's all state"_, saying it as if
nothing could be done and as if every single programmer's life should be
Java/C# + ORM + XML + SQL hell?

There are people trying to make our lives as devs easier. Why not try to
listen to what they're saying?

Is Rich Hickey "seeing things" with Clojure + Datomic?

To me the combination of a functional language (or at least a language that
can be used in a mostly functional way) and a CRA DB (Create Read Append)
which incorporates the notion of time from the start is a godsend to our
industry.

Why do you close your eyes?

~~~
Lapsa
Try googling around for "cqrs". There are many ideas on how to do exactly this
effectively.

All it takes is to swap the "write" operation for an "event" entity (all is
state). Combine that with intention-revealing commands (customer_died instead
of "delete from customers where id=1") and you hit the jackpot.

The end result is a nice stream of events which is the ultimate source of
truth and enables crazy queries you never thought you would ever need:
[http://abdullin.com/journal/2010/6/3/time-machines-should-su...](http://abdullin.com/journal/2010/6/3/time-machines-should-support-linq.html)
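
A toy sketch of that idea in Python (the customer_died event is taken from
the example above; everything else is illustrative): the event stream is the
source of truth, and a read model is just a fold over it.

```python
# Intention-revealing events instead of destructive UPDATE/DELETE rows.
events = [
    {"type": "customer_registered", "id": 1, "name": "Ada"},
    {"type": "customer_registered", "id": 2, "name": "Bob"},
    {"type": "customer_died", "id": 1},  # recorded, not deleted
]

def project_customers(events):
    """Fold the event stream into a read model (the query side of CQRS)."""
    customers = {}
    for e in events:
        if e["type"] == "customer_registered":
            customers[e["id"]] = {"name": e["name"], "alive": True}
        elif e["type"] == "customer_died":
            customers[e["id"]]["alive"] = False
    return customers

model = project_customers(events)
assert model[1] == {"name": "Ada", "alive": False}

# The history stays queryable: e.g. "how many customers have died?"
assert sum(e["type"] == "customer_died" for e in events) == 1
```

Since the events are never thrown away, new read models (and the crazy
queries) can be built later by re-folding the same stream.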

~~~
neeleshs
This is quite interesting. In practice, though, the lag between the command
and query models when the datastores are different (almost always, for large
data) becomes a real concern.

~~~
mcgwiz
Pardon my pedantry, but CQRS doesn't necessitate eventual consistency.
Denormalization of domain events into query models can happen synchronously.
It all depends on the needs of the project. If the project's information
architecture and processes can support a degree of latency (which is implicit
in almost any technology serving static screens anyway), then the performance
gains of eventual consistency can be realized.

A good, nuanced explanation of CQRS is here:
<http://codeofrob.com/entries/cqrs-is-too-complicated.html>

------
carlesfe
Wow, that went from a supposed linkbait title to a surprisingly amusing and
interesting explanation of Lisp

~~~
smartial_arts
Thank you. Admittedly, I intentionally worded it to look like one :)

~~~
hayksaakian
Was that with the intent of baiting clicks, or as satire?

~~~
smartial_arts
Neither - I cannot measure how many times it gets clicked, since it's not my
website, and I didn't really mean it in the satirical sense.

I guess I just wanted to see how many points it collects. So far it has
exceeded my wildest expectations, although that is mainly due to the content,
not the title.

~~~
sp332
In the _Explanation_ on Brevity Guild Micro Fighter, you use "i" as the
variable, but the code snippet uses "n". I only found this because I was
having fun following along! :)

~~~
smartial_arts
That's not my comic, unfortunately :)

You'd better let the website owners know.

------
nemetroid
I had Haskell as the subject of my intro course at university (my first
experience with functional programming). I've tinkered a bit with it since
then as well, and I'm at the "somewhat intuitive grasp of monad transformers"
stage.

I tried Clojure a week or so ago through the Clojure koans that were posted
here. Compared to Haskell, I found the syntax very obtuse, and it was not
obvious why Lisp would be more powerful than Haskell. The bare-bones syntax
felt more like a "proof of concept" than an actual strength.

(Of course, the koans took less than a day to do, so I'm not dismissing
Clojure just because they didn't impress me, but I got the idea that the koans
were an attempt to showcase Clojure's strengths.)

~~~
jeffdavis
"Compared to Haskell I found the syntax very obtuse and it was not obvious why
Lisp would be more powerful than Haskell."

Language "power" is ill-defined, and basically means "good".

If you think that homoiconicity is good, then you probably like lisp. If you
think that referential transparency is good, then you probably like haskell.
Those are mutually exclusive, so one language can't really have both. But
either one could be considered "powerful" and either one could be said to help
prevent bugs.

~~~
eggsby
How is it that referential transparency (the ability to swap a reference with
its value: no hidden inputs) and homoiconicity (both the source and the
resulting syntax tree sharing the same structure) are mutually exclusive?

What homoiconicity offers is the macro system, the ability to operate on the
syntax tree as a regular language data structure.

The difference is that a language like Haskell enforces referential
transparency, whereas in a Lisp it is up to the developer whether or not a
function will be referentially transparent.
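
To make the referential-transparency half of that concrete, a tiny
illustration in Python (nothing here is specific to Lisp or Haskell, and the
function names are made up):

```python
# Referentially transparent: the result depends only on the arguments,
# so any call can be replaced by its value without changing the program.
def square(n):
    return n * n

# Not referentially transparent: it reads and mutates hidden state.
counter = 0

def bump():
    global counter
    counter += 1
    return counter

assert square(3) == square(3) == 9  # substitutable by its value
assert bump() != bump()             # same call, different results
```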

~~~
jeffdavis
"How is it that referential transparency..."

That was incorrect, I meant: "the kinds of metaprogramming associated with
homoiconicity are mutually exclusive with referential transparency".

A macro could not, for instance, take a variable name as an argument and
return a result that's based on the value of that variable and maintain
referential transparency. So that would be a pretty weak macro system.

I suppose there may be other uses for homoiconicity, but I don't know enough
about lisp to comment on that.

~~~
cgag
Maybe I don't understand referential transparency, but why not? `or` is a
macro in Clojure:

(def a 10) (def b 20) (or a b) => 20

Is `or` not referentially transparent?

~~~
jeffdavis
(please excuse my poor knowledge of lisp... check out the wikipedia page if my
explanation falls apart:
[http://en.wikipedia.org/wiki/Referential_transparency_%28com...](http://en.wikipedia.org/wiki/Referential_transparency_%28computer_science%29))

My clojure interpreter gives "10" not "20".

The function "or" is referentially transparent, because if you call it with
the same argument values you are going to get the same result every time. (or
10 20) is the same as 10.

However, the following function is not referentially transparent:

    
    
      (defn incx [] (do (def x (+ x 1)) x))
    

Because subsequent calls return different results:

    
    
      (def x 2)
      (incx)
      (incx)
    

Things like side effects and many kinds of metaprogramming break referential
transparency.

~~~
wonderzombie
It seems to me that what you've demonstrated is that side effects are possible
in Clojure, especially with sufficiently obfuscated (read: un-idiomatic) code.
But you're working too hard anyway; Clojure explicitly admits mutability
already, through various concurrency constructs like atoms and refs.

That said, if any admission of mutability is sufficient to disqualify a
language from claiming to encourage or support referential transparency, then
Haskell fails the test, too. unsafePerformIO is a trivial example. I'm sure if
you were determined to introduce nondeterminism, you could find a lot more.
But that's not the point, is it?

Maybe there's a way to demonstrate your point, but what you've shown here
doesn't involve homoiconicity or macros at all.

~~~
jeffdavis
I should have known that my poor knowledge of lisp would not be excused on HN
;-)

Anyway, I tried learning enough Clojure to show that routine metaprogramming
is often not referentially transparent.

First, note from the wikipedia page that "referentially transparent"
essentially means that the same function with the same arguments will always
produce the same result, and that you can call it more (throwing away the
result) or fewer (replacing calls with the result) times without affecting the
meaning of the program.

So, mutating variables (assignment) violates it, as does reading mutable
variables other than the arguments. So, using "def" on a variable that already
has a value is not referentially transparent.

But let me show you something closer to what I had in mind when I said that
referential transparency and metaprogramming are essentially inconsistent.

Take the simple macro:

    
    
      (defmacro swapargs [x] 
        (list (nth x 0) (nth x 2) (nth x 1)))
    

First, I'll show that the argument it takes must be the code itself, rather
than the result of evaluating the code. The "or" macro before made it hard to
tell whether it was taking the code as its argument or the result of
evaluating the code. But with the macro above, it's easy to see:

    
    
      (swapargs (mod 7 5)) => 5
      (swapargs (mod 10 8)) => 8
    

If we are trying to show that swapargs is referentially transparent, we assume
that it will return the same result given the same arguments. Because (mod 7
5) = (mod 10 8), then we know it must not be taking the result as an argument
(because the result of the mod is 2 in both cases, but the result of swapargs
is different). So it's taking the code itself.

Next, we show that, given the same code-as-an-argument, it may return
different results in different contexts. That's easy to show:

    
    
      (defn foo [a b] (swapargs (mod a b)))
      (foo 7 5)
      (foo 10 8)
    

Now, we can't replace the call to swapargs with its result, because it changes
depending on the values of "a" and "b" at the time, even though the argument
is always the same code "(mod a b)".

So, this kind of advanced metaprogramming doesn't seem compatible with
referential transparency. Perhaps some subsets are, but I don't even think the
C macro system could be supported in a referentially-transparent way.

I also think that kind of metaprogramming tends to defeat many kinds of static
analysis, such as advanced type systems. I'm less sure of that one, but for
practical purposes now it seems to be true.

So, I think lisp and haskell are close to local maxima for their particular
philosophies, but neither one is any kind of epitome of programming or "more
powerful" than the other.

Personally, I think lisp-style metaprogramming is _very_ cool, and I am happy
I spent a few minutes trying out clojure. However, I don't think it is solving
a problem that is very important to me at a practical level. I am trying to
learn haskell because it is trying to solve the kinds of problems that I
actually have -- mainly software engineering problems (greater confidence in
code, more readability and maintainability). Not sure whether it will help
solve those problems for me, but they are trying very hard to do so, and make
some pretty compelling arguments.

~~~
jrapdx
As presented, the example macro doesn't demonstrate referential transparency,
but then again, the example appears to be incorrect.

The issue is invoking "(swapargs (mod 7 5))". I tried this in the Scheme REPL
(Chicken to be precise), in which the macro was defined:

    
    
      (define-syntax swapargs
        (syntax-rules ()
          ((_ ls) (list (list-ref ls 0) (list-ref ls 2)
                        (list-ref ls 1)))))
    
      (swapargs '(a b c)) => (a c b)
      (swapargs (swapargs '(a b c))) => (a b c)
    

In other words, the macro does show referential transparency.

However, the following doesn't work:

    
    
      (swapargs (modulo 7 5)) => Error: (list-tail) bad argument type: 2
    

The problem is the argument is evaluated first, and "2" is not a list. (The
macro requires a list-of-3 argument.)

    
    
      (modulo 7 5) => 2
    

Probably, what was intended was something like this:

    
    
      (swapargs '(modulo 7 5)) => (modulo 5 7)
    

And again:

    
    
      (swapargs (swapargs '(modulo 7 5))) => (modulo 7 5)
      (eval (swapargs '(modulo 7 5))) => 5
      (eval (swapargs (swapargs '(modulo 7 5)))) => 2
      (eval (swapargs (swapargs '(modulo 10 8)))) => 2
    

It looks like confusion between literal and evaluable lists prompted the wrong
conclusion, but in this case it's simple to rectify.

Of course, macros in Scheme/Lisp can easily become convoluted and bug-ridden
as much as any code, even aside from arguments about the virtues of "hygienic"
vs. "unhygienic" systems. Properly constructed, macros remain an essential
feature of Lisp/Scheme languages.

BTW, if we're comparing qualities of programming languages, here's a real-life
example showing the particular merit of Scheme. I took on the task of creating
a complex application (a web server supporting multiple hosts) and decided to
write it primarily in Scheme (and some C). The first version was up and
running in less than half a year.

Inevitably, months after the project was deployed, changes were necessary.
Despite the length of time since I'd last seen it, the code wasn't obscure to
me; it was easy to understand and to pick up where I'd left off before.
Definitely different from prior experiences.

The crux is getting a good grasp of its core, macrology perhaps being among
the harder parts. But once understood, Scheme allows enhanced productivity; in
my experience it has held up "under load" better than other languages in
comparable situations.

~~~
jeffdavis
Thank you for the detailed reply.

When you say my example is incorrect, do you mean that it's incorrect in
Clojure, or only in Scheme? I tried my examples in Clojure and they appear to
work, and they appear to demonstrate a lack of referential transparency. I
assume that Clojure is a valid Lisp for making a point about metaprogramming
and macros.

Also, it looks like it's fairly easy in scheme to show the same thing, which
it looks like you started to do (I'm not sure whether you agree with me about
that or not):

    
    
      (define-syntax swapargs
        (syntax-rules ()
          ((_ ls) (list (list-ref ls 0) (list-ref ls 2)
                        (list-ref ls 1)))))
      
      (define a 7)
      (define b 5)
      (eval (swapargs '(modulo a b))) => 5
      (define a 10)
      (define b 8)
      (eval (swapargs '(modulo a b))) => 8
    

The two calls to "eval" are identical, yet return different results. That
breaks referential transparency.

"showing the particular merit of Scheme"

From what I know, I like lisps of various flavors. I just said that they
didn't really speak to the kinds of problems that I deal with. Maybe if I
wrote more lisp I would see why it does so, but currently I do not.

~~~
jrapdx

      > The two calls to "eval" are identical, yet return
      > different results. That breaks referential transparency.
    

You are right that I don't know much about the syntax of Clojure, but the
Scheme version works as I'd expect. Yes, the eval calls return different
results, but then again, we'd expect to compute a different output for
_different_ inputs.

What I tried to show is that calling (swapargs (swapargs '(a b c))) will
always return the original list, that is, it demonstrates referential
transparency. In the case of '(modulo a b), evaluation returns the same
result when repeatedly given the same a, b inputs.

The point of the macro was exchanging the second and third elements of the
input list. Naturally for the modulo operation, the order of the inputs is
significant, and exchanging the operands will give the "opposite" remainder as
the result.

In your example the two calls to (eval ...) are _not_ identical and the
"different results" are perfectly correct, without implications for
referential transparency.

Don't know what kind of applications you might have in mind, but of course, no
PL is optimum in all domains. For the kinds of programs I've tackled,
Lisp/Scheme has been a good fit. Or maybe it has to do with the way my brain
works just as much as the purposes I am applying the language to. That
wouldn't surprise me a bit.

------
santu11
I am an intermediate programmer with just over one year of experience.

Everywhere I read about the power of Lisp, and I really want to use it. If it
is so good, why isn't it used more?

It is very easy to get sites running using ASP.NET, WordPress, RoR, or Django.
I have worked on production sites using the first two, and have personally
tried the last two on small projects.

Is there a way to use Lisp professionally?

~~~
muuh-gnu
> If it is so good, why isn't it used more?

* Weird syntax (for most people).

* No free implementations existed during a key period (80s, 90s), so no initial traction, no useful libraries and killer apps which would pull the whole ecosystem. Implementations didn't even exist for commodity hardware.

* The commercial implementations cost too much, so they suffocated the ecosystem. People preferred coding for free in C or Perl to paying an arm and a leg for Lisp. So they wrote all the useful libs in C, Perl, Java and Python instead of Lisp.

* No canonical implementation, plus late and incomplete standardisation, led to extreme fragmentation, which further killed off the growth of the ecosystem. Instead of writing useful libraries, Lispers wasted effort writing 1001 incompatible implementations of the same basic system.

So to summarize, I'd say the Lisp ecosystem is _still_ suffering the
consequences of the bad strategic decisions made 30-40 years ago.

But it is slowly but steadily healing and improving, especially in the last
few years. It has a high-quality free implementation in SBCL [1], consolidated
CPAN-like library management with Quicklisp [2], and an IDE with the
Emacs-based SLIME [3]. Everything is getting better.

[1] <http://www.sbcl.org/>

[2] <http://www.quicklisp.org/beta/>

[3] <http://common-lisp.net/project/slime/>

~~~
lazyjones
> * No free implementations existed during a key period (80s, 90s) so no
> initial traction, no useful libraries and killer apps which would pull the
> whole ecosystem. Implementations didn't even exist for commodity hardware.

Emacs Lisp (OK, a limited dialect) was available, and so was CMUCL (a full
implementation), which I believe was used for teaching in 1992 when I first
came in contact with LISP at our uni ...

Also, back then (80s and 90s) most people still paid an arm and a leg for C,
Modula and Pascal on their platforms, so that can't have been the issue. My
take is that LISP implementations were too slow to justify their use for most
people over faster compiled languages. Whether you paid for the language or
not, you expected to be able to get the most out of your hardware.

~~~
chimeracoder
> faster compiled languages

Lisp _is_ a compiled language.

For that matter, it's a damn fast one, too. The Lisp implementation of PCREs
is actually faster than Perl's, by some benchmarks.

I don't want to start a tangent about benchmarks and their relevance, but it's
clear that Lisp performance isn't a limiting factor.

~~~
lazyjones
C is still faster at common tasks, and back then, code from readily available
C and Pascal compilers was much faster than CMUCL or ELISP (both had only a
bytecode interpreter, AFAIR). My point is that in the 80s and 90s computers
were much slower, and a factor of 2 was a big deal then, especially for
professional developers who had to write well-performing applications.
Nowadays a good language is "fast enough" even if it's only half as fast as C.

~~~
lispm
cmucl has native code generation for a VERY long time.

------
S4M
I fail to see how this is going to convince anybody who hasn't tried lisp to
give it a shot. This cartoon can be summarised as "All languages are
accumulating bugs while lisp has some magic X and Y that provide a way around
them."

A blub user will think "yeah whatever", IMHO.

~~~
king_magic
My thoughts exactly, but I'd add this: I love and respect Lisp; I've written
my fair share of it. There is absolutely nothing inherently magical in Lisp
that prevents you from introducing bugs. You can just as easily make a logic
error in your code that results in a bug. Bugs happen. Bugs will happen. There
is no silver bullet. The only way to reduce bugs is to test thoroughly.

tl;dr: nice cartoon, "lisp means no bugs" == total BS, bugs can totally exist
in Lisp code

~~~
S4M
Yeah, I think the link posted here does little to make Lisp popular,
especially compared, for example, to a nice tutorial like the Seesaw one for
building GUIs in Clojure [0]. There, you can see how interactive Clojure is,
and how fast it can be to develop stuff. Note that I took this example because
it's at the top of my mind, but I am sure similar demonstrations of
interactivity can be given for other Lisps.

[0] <https://gist.github.com/1441520>

~~~
king_magic
Holy crap, Seesaw looks awesome! See, that's far more powerful - I had no idea
that even existed, and now I'm excited to play around with it. This is great!

I don't need a cartoon to sell me on something. I need to see how something
can be used.

------
ericmoritz
I'm missing something; how does LISP enable bug-free programs?

~~~
benjoffe
Did you click on any of the blue links in the comic? They contain articles on
language features with sections: 'Synopsis', 'How it kills bugs',
'Explanation' and 'Weakness'.

~~~
goostavos
Whoa, hopefully I'm not the only one who missed that. I thought it was just
animated text, not a link.

------
mwexler
Great intro. Any comments on "Land of Lisp" as a book for a "want to learn
Lisp" journey?

~~~
craftman
An excellent book to discover and learn Lisp, maybe the best one actually. It
covers all aspects of Lisp and provides lots of example code based on games.
This is somehow more fun than the math examples of SICP (even if that is
another great book I deeply appreciate too).

~~~
muraiki
Thanks for this -- I've been thinking about taking up SICP, but I'm not that
math-oriented (I do want to change that, though!). I think that I'll pick this
book up first.

------
revjx
Superb. I went on to read the sample chapter, which was equally amusing and
interesting. I think I'm going to order the book, despite the fact that I
can't really think of any practical applications for Lisp in my day job -
although who knows.

------
stcredzero
Most of the time, you don't want to save the world because this presents
scaling problems. Instead, save a little corner of the world and be open about
how you are doing it. If you do this right, then you will garner lots of
imitators. Then if your way of "saving the world" is well documented and
robust enough to avoid the "cargo cult" pitfall, you will convince some large
part of the world to save itself.

Note the implication: You don't save the world by telling it, "You're doing it
wrong." You save the world by getting the world to covet your success.

------
dschiptsov
There is a less insane way to such personal insights -
<http://www.paulgraham.com/onlisptext.html>

------
borplk
Am I the only one having a hard time reading lisp?

Non-functional programs read like plain English (particularly Python), but I
just can't get my head around the functional ones.

~~~
loup-vaillant
Imperative programs read like English. Functional ones read like Apache.

The main difference is that a functional program mainly talks about what
things _are_, not what they _do_. For instance:

    
    
      // C(++)
      int square(int n)
      {
        return n * n;
      }
    
      -- Haskell
      square :: Int -> Int
      square n = n * n
    

The C code reads like a procedure to follow: "return n times n to whoever
called you" (I'm anthropomorphising square(), here). The Haskell code reads
like a description: "the square of n is n times n".

That was the first major difficulty. Now the second one: functional code is
often like reversed imperative code:

    
    
      // C(++)
      int compute(int n)
      {
        int x = foo(n);
        int y = bar(x);
        int z = baz(y);
        return z;
      }
    
      -- Haskell
      compute :: Int -> Int
      compute n = baz (bar (foo n))
    
      -- alternate definition
      compute n = baz . bar . foo $ n
        where g . f = \x -> g (f x) -- function composition
              f $ x = f x           -- function application
    

So, in the Haskell code, you see that the data flows from right to left,
instead of top to bottom. Like Unix pipes, only reversed.

The final difficulty is getting used to the fact that functions are passed
around _directly_. In C++, Java, or Python, we often pass around _objects_,
which may or may not hold the same methods as the others that were passed
around in the same way (that's polymorphism). Subtype polymorphism is neat,
but most often, you need it because you want _one_ method to change depending
on various factors. A simpler way to do this is to pass the function around
directly.

That leads to some idioms that are powerful, though uncommon in the imperative
world. For instance, you can write your own customized loops. Imagine, for
instance, that you want to process lists in Haskell:

    
    
                                    -- A List is either
      data List a = Empty           -- an empty node,
                  | Cons a (List a) -- or a cons cell,
                                    -- with an element and a list.
    

Now let's process the list

    
    
      inc-all :: List Int -> List Int
      inc-all Empty      = Empty
      inc-all (Cons e l) = Cons (e + 1) (inc-all l)

      dbl-all :: List Int -> List Int
      dbl-all Empty      = Empty
      dbl-all (Cons e l) = Cons (e * e) (dbl-all l)
    

See how much they have in common? There's a way to factor that out, with a
map:

    
    
      map :: (a -> b) -> List a -> List b
      map f Empty      = Empty
      map f (Cons e l) = Cons (f e) (map f l)
    

Note that the first argument is a function, hence the (a -> b) between
parentheses. Using it is very simple:

    
    
      inc-all l = map (λe -> e + 1) l
      dbl-all l = map (λe -> e * e) l
    

And Haskell can make it even more concise, with what we call partial
application. Haskell functions actually take only one argument. Multiple
arguments are simulated by having the function return another function. Here:

    
    
      add :: Int -> (Int -> Int)
      add x = λy -> (x + y)
    

Which is the same as:

    
    
      add :: Int -> Int -> Int
      add x y = x + y
    

Or even

    
    
      add :: Int -> Int -> Int
      add = λx -> (λy -> (x + y))
    

So, inc-all and dbl-all above can be written as:

    
    
      inc-all = map (λe -> e + 1)
      dbl-all = map (λe -> e * e)
    

Without those fundamentals, one doesn't stand a chance of understanding
real-world functional code. It's just too different. It's no harder, though.
The main difficulty here is to change your mindset.

Now I haven't talked about macros…

------
cafard
I have no problem believing in insectoid domination of earth. Bug-free
software, Lisp or otherwise, does strain my credulity.

------
mrinterweb
This book seems inspired by _why's poignant guide to ruby:
<http://mislav.uniqpath.com/poignant-guide/book/chapter-1.html> Not that
that's a bad thing, but I'm just surprised no one else has mentioned it.

------
TomMasz
This looks like a comic from the 70s, photocopied and hanging on the bulletin
board in the computer lab.

------
gclaramunt
Typical Lisp propaganda: many brave non-Lispers are part of the functional,
brevity, continuation and DSL guilds. And it chooses to ignore that the
biggest battle was won by the type system guild...

------
sek
I thought Lisp was the pinnacle of programming until I discovered Haskell.

~~~
lispm
There are a lot of cool things out there. Haskell is one of them.

------
ratonofx
A fantastic metaphor!

------
zem
apart from the excellent comic, i loved the insight into how laziness helps
fight bugs. i'd never thought about it in those terms before.

------
dsafhkjsdh
who write this crap? i wrote in lisp (scheme), this is the most bugged
language i ever used. and i used more then 5.

this is non debugable language, it it makes it bug full. you have to follow
complex ideas, and keep scores of ideas, this is not for normal humans.

when you pass the wrong type and not support it, it goes to hell, and as a
programmer you start to flame.

list, is a piece of shit for the masses, its useful for handful, who didn't
distribute the ideas well, and people when to the place where it was easier to
write code, and less is needed for getting your path going.

~~~
slurry
I enjoy Lisp, but I have to agree with you about Scheme debugging. MIT/GNU
Scheme has to be the most unhelpful interpreter I've ever used. Error messages
are loud and completely unhelpful, and by the end of it I was convinced the
REPL was actively trying to make me feel stupid.

~~~
spacemanaki
Did you read the manual? [http://www.gnu.org/software/mit-scheme/documentation/mit-sch...](http://www.gnu.org/software/mit-scheme/documentation/mit-scheme-user/Command_002dLine-Debugger.html#Command_002dLine-Debugger)

It's not as easy as setting breakpoints in an IDE like Eclipse and using a
visual debugger there, but it's definitely a workable command-line debugger.
You just need a little patience in order to learn how to use it, and probably
also a pretty good understanding of Scheme's execution model, but that would
be true for debugging in any programming language.

~~~
slurry
> Did you read the manual?

For real?

> It's not as easy as setting breakpoints in an IDE

I don't use an IDE. I'm comparing against command line tools for scripting
languages, Haskell and Common Lisp. Some fail more helpfully than others.

~~~
spacemanaki
Sorry for my tone, I guess YMMV... I just never had any issue with MIT
Scheme's debugger, although I admit that CL with SLIME is a bit more useful,
especially compared to MIT's weird Emacs clone (or fork?) "Edwin".

~~~
tbirdz
For a SLIME-like REPL for Scheme, you might want to check out Geiser:
<http://www.nongnu.org/geiser/>

It works great for guile and racket.

