
Not Lisp again (2009) - coldtea
http://funcall.blogspot.sg/2009/03/not-lisp-again.html
======
goldmab
This is a fun demo of functional programming, but Lisp isn't so special
anymore. In Python:

    
    
      >>> def deriv(f):
      ...   dx = 0.0001
      ...   def fp(x):
      ...     return (f(x + dx) - f(x)) / dx
      ...   return fp
      ...
      >>> cube = lambda x: x**3
      >>> deriv(cube)(2.0)
      12.000600010022566
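
A small aside, not in the original comment: the snippet uses a forward difference, whose error shrinks only linearly with dx. A central difference, a standard refinement, roughly squares the accuracy for the same dx:

```python
def deriv(f, dx=0.0001):
    # Central difference: error is O(dx^2), versus O(dx) for the
    # forward difference (f(x + dx) - f(x)) / dx used above.
    def fp(x):
        return (f(x + dx) - f(x - dx)) / (2 * dx)
    return fp

cube = lambda x: x**3
print(deriv(cube)(2.0))  # within about 1e-8 of the exact derivative, 12.0
```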

~~~
city41
This is my hang up with Lisps over and over again. I read through SICP and did
almost all of the exercises. I've played with Clojure and done some small
projects in it. I've recently started dabbling in ClojureScript. And I just
can't seem to get to the point where Lisp becomes readable and quickly
parsable by my brain. That snippet of Python you wrote is _immensely_ more
readable than the Lisp in the article. The equivalent snippet in Ruby,
JavaScript, C#, Haskell, whatever, would also be immensely more readable than
Lisps.

Does it just take some serious perseverance? Or am I just Lisp-dumb?

~~~
eksith
Nope, you're not. Readability is crucial for those of us not wired the same
way as those who seem to be more comfortable in Lisp. Can you get a lot done
in Lisp? Sure, if you can scan through and make sense of it. I've tried for a
long time, almost to the point of it cutting into my work time, but I haven't
been able to get comfortable. The innumerable parentheses, for example, make
perfect sense to those with the Lisp brain, but since I'm reading it in the
context of English, where too many of those are frowned upon, it's nearly
impossible for me to follow quickly.

Whenever I bring this up among my colleagues, I get shot down by the
proficient folks. The argument then turns into how I'm deficient somehow or
haven't tried hard enough -- the word "stupid" came up a few times -- and that
just completely turned me off the language.

Religion is a touchy subject.

~~~
alberich
Just an observation about the parentheses... Although it can be hard to make
sense of the large number of parentheses, good indentation (Emacs does it for
you) helps a lot.

Of course Python makes it easier to figure out what code belongs where; it was
made with this goal in mind, after all.

I believe it is a tradeoff: Lisp's syntax gives you flexibility and an easy
(in relative terms) and powerful macro system, at the expense of code
readability (when compared with some languages).

~~~
TylerE
If you're using indentation anyway, why not allow indentation to IMPLY
parentheses?

So that, say:

    
    
        foo 1 2 3
        bar 1 2
        if (eq qux 3)
          fizzbuzz 7
    

is translated by the parser to:

    
    
        (foo 1 2 3)
        (bar 1 2)
        (if (eq qux 3)
           (fizzbuzz 7))
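
A rough sketch of such a translator in Python (a hypothetical toy syntax, not
dwheeler's actual sweet-expression reader; it assumes the first line starts at
column 0 and one form per line):

```python
def indent_to_sexp(source):
    # Each non-blank line becomes (indent-width, stripped-text).
    lines = [(len(l) - len(l.lstrip()), l.strip())
             for l in source.splitlines() if l.strip()]

    def build(i, indent):
        # Collect sibling forms at this indent level; any deeper lines
        # nest inside the most recent sibling. Returns (forms, next-index).
        forms = []
        while i < len(lines) and lines[i][0] >= indent:
            if lines[i][0] > indent:
                children, i = build(i, lines[i][0])
                # Splice the children in before the sibling's closing paren.
                forms[-1] = forms[-1][:-1] + " " + " ".join(children) + ")"
            else:
                forms.append("(" + lines[i][1] + ")")
                i += 1
        return forms, i

    return "\n".join(build(0, 0)[0])

print(indent_to_sexp("foo 1 2 3\nbar 1 2\nif (eq qux 3)\n  fizzbuzz 7"))
# (foo 1 2 3)
# (bar 1 2)
# (if (eq qux 3) (fizzbuzz 7))
```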

~~~
spacemanaki
It's been proposed a few times, and some people have gone pretty far with it,
<http://www.dwheeler.com/readable/sweet-expressions.html>

It's never really caught on, though; I would guess due to inertia. Most Lisp
programmers just get used to the parens, and maybe those that can't end up not
using Lisp.

~~~
rapind
As a non-lisper reading over some of those examples I definitely prefer his
"Sweet-expression". Seems much more approachable.

Once I make a serious foray into Lispland maybe that'll change...

~~~
gknoy
Change your IDE's color scheme so that parens are rendered in a low-contrast
font color. The parens will be less obvious.

Emacs' automatic indentation to The Proper Place is tremendously useful. I
find Lisp's indentation to be just as easy as Python's, for example -- likely
because I used a Lisp for six years before coming to Python.

Once you've used it for a while, you don't think of it as much different from
using {} for control blocks, () for method calls, and [] for array indexing in
other languages. For me, the parens "go away", in that I follow the program
control flow more by indentation than by counting parens.

------
thecombjelly
> First of all, a disclaimer: standard R5RS Scheme can not be executed
> efficiently. Period. Every implementation must bend the semantics of the
> language in more or less subtle ways to do a minimum of optimization and
> there are several approaches to do this[0]

In my experience, Lisp developers care a ton about performance. Most Common
Lisps and Schemes provide numerous ways to get good performance. With Lisp you
get power, flexibility, and performance. It gets even better when you use
something like Chicken Scheme, which compiles to C and allows you to embed
C code.

I run multiple web servers written in Chicken Scheme and they are very fast,
much faster than nearly any comparable web framework, like Rails.

[0] From a Chicken Scheme guide on programming for performance:
<http://wiki.call-cc.org/programming-for-performance>

~~~
emiljbs
I've heard that the Stalin compiler is very good, I don't know much about it
though.

~~~
gsg
Stalin compiles a subset of Scheme.

------
skore
Well, gee, thanks for reminding me AGAIN that I have this itch for diving into
LISP that never subsides, no matter how strongly I put it off as overly
academic or otherwise removed from reality. Upvoted.

~~~
spacemanaki
In case you don't know about this and you want to scratch your itch... you can
experience almost the same class this blogger did, in all its glory, for free.
Video lectures with the same professors and authors of the famous book.

<http://mitpress.mit.edu/sicp/>

~~~
douglasisshiny
Maybe I'm an idiot, but I don't see a link for video lectures on that page.

Edit: But they are here: [http://ocw.mit.edu/courses/electrical-engineering-
and-comput...](http://ocw.mit.edu/courses/electrical-engineering-and-computer-
science/6-001-structure-and-interpretation-of-computer-programs-
spring-2005/video-lectures/)

~~~
spacemanaki
Oops you're right, thanks.

------
ufo
It's a bit redundant with the article, but I think everyone should at least
check out the Lambda the Ultimate papers written by the guys in the MIT Scheme
group. In particular, the "GOTO" one amazes me a bit, because even today the
"expensive procedure call" myth still exists somewhat.

<http://library.readscheme.org/page1.html>

------
colanderman
The Mercury language (<http://www.mercury.csse.unimelb.edu.au/>) does one
better. It can use the fact that multiplication is associative to
automatically transform the non-tail-recursive function into the tail-
recursive form.
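
The transformation can be sketched by hand in Python (illustrative only;
Mercury and gcc apply it automatically): because * is associative, the pending
multiplications can be folded into an accumulator, leaving the recursive call
in tail position:

```python
# Non-tail-recursive: the multiply happens after the recursive call
# returns, so each stack frame must stay alive to do it.
def fact(n):
    return 1 if n == 0 else n * fact(n - 1)

# Associativity of * lets the pending products be accumulated on the way
# down instead, putting the recursive call in tail position.
def fact_acc(n, acc=1):
    return acc if n == 0 else fact_acc(n - 1, acc * n)

print(fact(10), fact_acc(10))  # 3628800 3628800
```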

~~~
millstone
For that matter, so can gcc.

~~~
colanderman
Wow, I didn't know this. It's such a great feature, they really should
advertise it more.

(Examining assembler output shows that a naïve implementation of fact() also
gets the autovectorization treatment as well as a fair bit of loop unrolling.
Very impressive.)

------
emiljbs
Fortunately there are a lot of these 'magical' languages nowadays; some that
come to mind are Haskell and Shen.

And yeah, functional feels much cooler than imperative, who would've thought
that the teenagers were right? Being lazy and doing everything at the last
minute _is_ cool.

~~~
betterunix
...would it be wrong to point out that Shen is a Lisp too?

------
craigching
"As soon as the lecture ended I ran to the Scheme lab. Hewlett Packard had
donated several dozen HP 9836 Chipmunks to MIT. Each one was equiped with a
blindingly fast 8-MHz 68000 processor."

That brought back a lot of memories ;) We had similar machines (I'm sure they
were newer, as this was 1983 and my class was circa 1990) at the U of Mn,
where we did Motorola 68000 assembly and Scheme :) They were HPs of some kind
and I remember them!

------
gnosis
The blog post mentions Lisp, but it should be pointed out that what they're
using in that course is Scheme, which should not be confused with Common Lisp.

~~~
wglb
Both Scheme and Common Lisp are dialects of Lisp.

~~~
pavelludiq
Which means little. Scheme and CL have been going in very different directions
pretty much since the beginning. Idiomatic Scheme code and idiomatic CL code
are very different.

------
cemerick
gnosis mentioned that the code in the article is Scheme. FWIW, a Clojure
transliteration of the derivative example is pretty trivial, if that's the
sort of REPL you happen to have close at hand:

    
    
        => (def dx 0.0001)
        #'user/dx
        => (defn deriv
             [f]
             (fn [x]
               (/ (- (f (+ x dx)) (f x))
                 dx)))
        #'user/deriv
        => (defn cube [x] (* x x x))
        #'user/cube
        => (def cube-deriv (deriv cube))
        #'user/cube-deriv
        => (cube-deriv 2)
        12.000600010022566
        => (cube-deriv 3)
        27.00090001006572
        => (cube-deriv 4)
        48.00120000993502

------
melipone
Lisp is "cognitively" different from, say, C, Java, and Python. It's like
learning a new language. How long will it take you to be fluent in Arabic? It
takes five years. But the best part is that after five years, you won't forget
it.

I've managed to work for 10 years in Lisp. Then, I had to work in Java. After
another 10 years of Java, I took up Clojure very easily.

------
pmelendez
"who cares about something as mundane as that? I want to do magic." I like how
the word "magic" keeps popping up in each Lisp-related article I read. And the
article doesn't even talk about macros!

~~~
willismichael
"Any sufficiently advanced technology is indistinguishable from magic" -Arthur
C Clarke

Whenever "magic" is mentioned in articles like these, you can probably safely
substitute "higher-order abstractions that aren't available in the language(s)
I'm accustomed to". Of course it's easier (and more fun) to say "magic".

~~~
bloaf
The best use of "magic" I've seen came from a kids TV show where the
protagonist would simply refer to new or mysterious things as "magic" even if
the other characters offered him an explanation of how the thing worked. To me
it was a perfect example of black-box abstraction.

It's magic because we know _what_ it is doing, but the _how_ is not
intuitively obvious.

~~~
gknoy
And, as long as we know how to invoke it reliably and properly, we don't
__need__ to understand how. (Though, being nerds, we often want to.) I agree:
"Magic" is one of the best shorthands for "We can explain that later but for
now let's assume this works".

------
mghook
I would just like to point out that the example is not doing calculus: it is
not taking the function x^3 and returning the function 3x^2; it is using a
repetitive process to narrow in on an approximation of the result. It is a
generally applicable method for narrowing in on a value. The same thing can be
done in C with function pointers; whilst it is a great algorithm, it is not an
argument for functional programming. I love functional programming and wish
people would stop being entranced by maths examples.

------
Uncompetative
LISP is S-expressions.

Mathematica is M-expressions, kinda...

Macros are considered harmful as they can lead to unmaintainable code.

~~~
demetrius
It’s not macros that are harmful, it’s abuse of macros.

------
arocks
The iterative version's magic can be explained by Tail Recursion Elimination.
Python creator Guido's post on why he left it out of Python might be an
interesting read: [http://neopythonic.blogspot.in/2009/04/tail-recursion-
elimin...](http://neopythonic.blogspot.in/2009/04/tail-recursion-
elimination.html)
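
Since CPython has no tail recursion elimination, a common workaround (a sketch
of the general trampoline technique, not anything from Guido's post) is to
have the tail call return a zero-argument thunk instead of recursing, with a
driver loop invoking thunks until a real value appears, so stack depth stays
constant:

```python
def trampoline(f, *args):
    # Keep calling as long as the result is itself a callable thunk.
    # (Assumes legitimate return values are never callables.)
    result = f(*args)
    while callable(result):
        result = result()
    return result

def countdown(n):
    if n == 0:
        return "done"
    return lambda: countdown(n - 1)  # thunk instead of a direct tail call

# Runs far past CPython's default ~1000-frame recursion limit:
print(trampoline(countdown, 100000))
```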

~~~
bitwize
Guido mentions that you can't get an accurate stack trace in the presence of
tail-call optimization. That alone makes it an absolute no-go for just about
ANY production code (where ease of debugging is orders of magnitude more
important than ease of writing in the first place).

~~~
raganwald
That makes it sound like 1. you can never get an accurate stack trace, and 2.
the stack traces you do get that are "inaccurate" are worthless.

I can't speak to every optimizing interpreter or compiler, but in the few that
I've used, TCO doesn't do anything to code that doesn't have tail calls, so
lots of the code has "accurate" stack traces.

And when it does perturb the stack trace, it does so in a very obvious way:
it abbreviates the tail calls. Such stack traces no longer have a 1:1 mapping
with the function "calls," but are still quite informative for debugging
purposes. Not as informative as they would be if you turned TCO off just to
debug that code, but informative enough that I rarely had to turn it off.

~~~
snprbob86
Never mind the fact that tail calls are just fundamentally loops, which don't
generate any stack frames anyway!

If your application is complex enough that inspection won't reveal the
source of the bug, then stack traces are almost strictly less useful than
logging.

~~~
millstone
Tail calls can be used to increase the efficiency of loops simulated using
recursion. However, not all tail calls are loops, e.g. void foo() { bar(); }

A good backtrace for a crash or exception can tell me the entire call stack.
Logging at that granularity (i.e. every time a function is called) is not a
good idea.

