
Why Lisp is a Big Hack (And Haskell is Doomed to Succeed) - yewweitan
http://axisofeval.blogspot.com/2011/01/why-lisp-is-big-hack-and-haskell-is.html
======
raganwald
Let me see if I understand this: The argument is that a hypothetical language
descending from Haskell will be (a) superior and (b) more popular than Lisp as
we know it today and to any Lisp that might evolve from it.

I think there's too much hand-waving going on to agree or disagree with the
premise. Language popularity is notoriously fickle and driven by
considerations that don't even earn a footnote in the OP.

For example, one of the most popular languages in the world today is
Javascript, a language that owes a little of its heritage to Lisp and none at
all to Haskell. Why is it eating Lisp's lunch and Haskell's lunch today?
Because it ships as the default runtime for the most popular application in
the world, the web browser.

Another language that has the same breadth of reach is PHP. Again, its
popularity is due to the fact that it is a default runtime for most of the
world's hosted web server platforms.

If we are to guess which language is going to gorge itself on everyone else's
lunches in the future, I would suggest we spend our time thinking about which
platform is going to be insanely popular and what new language might be its
default runtime.

I'm not providing any fresh insight, of course:

 _Let's start by acknowledging one external factor that does affect the
popularity of a programming language. To become popular, a programming
language has to be the scripting language of a popular system. Fortran and
Cobol were the scripting languages of early IBM mainframes. C was the
scripting language of Unix, and so, later, was Perl. Tcl is the scripting
language of Tk. Java and Javascript are intended to be the scripting languages
of web browsers._

<http://www.paulgraham.com/popular.html>

~~~
pwpwp
I was only talking about (a), the language being "superior" in a very
"idealistic" and fundamental sense - i.e. being "more expressive".

I do believe that the son-of-FP I hand-waved about will also attain (b), being
the most popular language, but that's much further out - say 50 years.

~~~
raganwald
It could happen sooner: If there are enough posts like this, someone may be
developing a programmable platform of some sort and think "I know! I'll borrow
some ideas from Haskell for the DSL!!"

That's all it takes.

------
dexen
This is the first time I feel PG's `The Hundred-Year Language' may have an
answer different from Lisp. On the other hand, `a witty blogpost proves
nothing', to misquote somebody.

~~~
pwpwp
The great thing is: if and when Haskell ever evolves as far as described in
the post, we'll have Lisp again - inside "Haskell". ;)

~~~
andreyf
No. Haskell has type safety which was never intended to be included in Lisp.
On the other hand, Lisp has the s-expression syntax which allows for usable
compile-time macros. Neither language is a subset of the other.

How useful type checking and macros are in creating real-world software is not
a question I've seen answered well. Smalltalk-style languages (Python, Ruby)
don't really practice either.

~~~
pmarin
"Haskell has type safety which was never intended to be included in Lisp."

Racket has a dialect with static types: <http://docs.racket-lang.org/ts-guide/>

------
sfvisser
As a Haskell programmer I never feel the need to add any form of dynamic
typing. In practice dynamic typing makes it harder to build programs, not
easier.

I'm the first one to admit that Haskell's type system has a somewhat steep
learning curve. But after a while, when you internalize it and learn to
pattern match on GHC's type errors, day-to-day programming will become easier.

~~~
lelele
> In practice dynamic typing makes it harder to build programs, not easier.

OTOH, static typing could make writing some kinds of programs impossible. In
an interview, Joe Armstrong of Erlang fame said that they tried hard to put a
type system on top of Erlang, but they were not able to.

IMO, static typing should be optional, that is: you should be able to run a
static type checker on your programs without being constrained by it at every
step.

~~~
hesselink
> OTOH, static typing could make writing some kinds of programs impossible.

Often, these turn out to be incorrect programs or programs without a good
structure/architecture. It is my experience that static typing makes it
slightly harder to write the first version of a program, but subsequent
alterations become an order of magnitude easier. For me, this is worth it,
since alteration and maintenance are the largest part of most programs' life
cycles.

~~~
lelele
> Often, these turn out to be incorrect programs or programs without a good
> structure/architecture.

Then this critique would apply to most - all? - Erlang programs too ;-)

------
lispm
Last time I looked Haskell and Lisp were totally different languages, with
different communities, different technologies, different application areas,
...

Why should this change?

------
kunjaan
If he favors type safety and switching between modules of different type
systems, Racket already has that, doesn't it?

~~~
pwpwp
Good point - I forgot to consider this approach. I'll come back to it at
another point.

Generally though, I think Haskell's approach of re-adding freedoms is more
fruitful than trying to add bondage to Lisp. And "the Haskell movement"
certainly has a different kind of manpower behind it.

~~~
kunjaan
>And "the Haskell movement" certainly has a different kind of manpower behind
it.

Can you clarify what you mean by the words "movement" and "different kind of
manpower"?

How is progress in languages like Racket different from Haskell?

Why do you think that adding "interesting" type systems in Lisp would be
"heroic" and less "fruitful"?

~~~
pwpwp
There are probably hundreds, if not thousands, of researchers who use Haskell
(and related PLs) as their vehicle for type system research. The people
working on "gradual" type systems like Typed Racket probably number around a
dozen, which means that progress in Haskell happens much faster and with more
breadth.

~~~
Raphael_Amiard
While I think you make valid points, I do not agree with the conclusion.

First, I don't think tying innovation to the number of researchers is a safe
predictor, since the relationship is clearly not linear.

Second, the fact that Haskell is a testbed for researchers doesn't make it the
ideal language for "real world programming", in my opinion. It could be that
it is, but again the correlation is less than clear.

And third, and this is the most important point in my opinion, the ideas
developed in Haskell are fully available to other language implementors once
they exist, and so is the experience of the researchers. That means a language
like Typed Racket, or any other, could implement the best and most useful of
those ideas in much less time than it took to first find them.

~~~
thesz
>Second, the fact that Haskell is a testbed for researchers doesn't make it
the ideal language for "real world programming" in my opinion.

Once upon a time Lisp was a language for researchers.

~~~
kunjaan
It still is.

------
jaekwon

       Of course you have to subscribe to the idea that this
       safety and verification is something that's good and
       superior. I do.
    

Why is it superior? I've always believed that a superior language is one that
is as close to natural language as possible. Natural language doesn't have
strict type checking, and a great many things have been said with 'incorrect'
grammar.

~~~
merijnv
Take Epigram as an example (also mentioned in the blog post). Epigram is not
Turing complete; as such, the language can be made strongly terminating. This
means all programs are guaranteed to terminate (no Halting problem). Programs
are also strongly (and statically) typed, meaning that when your program
compiles it is guaranteed to behave correctly (barring hardware malfunction).
Essentially, this means the large majority of bugs are impossible to write in
this language.
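(A quick illustration of what "strongly terminating" means in practice: in a
total fragment, recursion must consume a strictly smaller argument, as in this
Haskell-style sketch with an invented name:)

```haskell
-- Structural recursion: each call receives a strictly smaller
-- list, so termination is guaranteed without any halting analysis.
len :: [a] -> Integer
len []     = 0
len (_:xs) = 1 + len xs
```

A definition like `loop x = loop x`, by contrast, is exactly the kind of thing
a strongly terminating language refuses to accept.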

Of course, creating a trivial language that is strongly terminating and
statically typed is pretty easy. What Epigram attempts is to add enough
flexibility to write all the programs we care about. Not being Turing
complete, it can't, by definition, express all programs. But if we can write
all the programs we want to write, the inability to express _all_ programs is
irrelevant.

Now, you might not think these benefits make a language superior to other
languages, but I (and the blog post's author) do. As he mentions in the post,
adding the ability to lessen these guarantees (i.e. make the language more
dynamic on request) gives us all the upsides (fewer bugs!) and no downsides,
since we can dial our guarantees down to be less strict and provide more
freedom as requested. Lisp _can't_ do the same in reverse, as adding these
static guarantees to Lisp is a beyond-Herculean task.

~~~
dexen
If Epigram is not Turing complete, then what kinds of programs cannot be
expressed in it? Is the developers' aim to add Turing completeness at some
point, or is remaining non-Turing-complete a deliberate design choice?

~~~
merijnv
EDIT: HN ate my asterisks

Disclaimer: This post contains broken pseudo-syntax and pseudo type theory;
feel free to make corrections if you are smarter than me.

Epigram is not (as of yet) truly aimed at being a practical language in the
hacker sense. It is a research vehicle trying to establish a practical type
theory to program in (i.e. trying to find the sweet spot of how much freedom
we require to write useful programs). My knowledge of the underlying theory is
still somewhat limited, so I don't dare say which programs cannot be written
in it.

I would say the lack of Turing completeness is a deliberate design choice.
Turing completeness and strong termination are mutually exclusive. Introducing
Turing completeness automatically means introducing the Halting problem.

As mentioned in the blog post, (one of) the theories behind Epigram is
"dependent typing". I agree with the writer that this will be the most
exciting thing in programming since... well, ever. What does dependent typing
mean? It means the type of a function can depend on the values of its input.
Too abstract? Let's see if I can clarify:

Haskell currently supports types depending on types (note: a lot of this
won't be actual valid Haskell, but pseudo-Haskell syntax), a classical example
being the polymorphic list:

    
    
        data List a = Nil | Cons a (List a)
    

This is a type which depends on another type. We have "List a" for any given
type a. Canonically the type of a type is called * . That is 1:Int meaning "1
has type Int" and Int:* meaning "Int has type * ". Then what is the type of
List? It is of course "forall a : * . * " which you could interpret as a
function taking a type and returning a new type List :: * -> * .
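(As an aside, GHCi can report these kinds directly with its `:kind` command;
the session below is a sketch, with GHCi's answers shown as comments:)

```haskell
-- The polymorphic list from above.
data List a = Nil | Cons a (List a)

-- In a GHCi session:
--   ghci> :kind Int
--   Int :: *
--   ghci> :kind List       -- a type constructor awaiting one type
--   List :: * -> *
--   ghci> :kind List Int   -- fully applied, an ordinary type
--   List Int :: *
```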

However, this is not flexible enough to describe some things we want to be
able to describe. For example, in Haskell we have the function "head",
returning the first item in a list:

    
    
        head :: [a] -> a
        head (x:xs) = x
    

Now, this function has a problem: what should happen when we call head with
the empty list as argument? Haskell right now throws an exception, but the
compiler should know how long a given list is, and it should know that we
_cannot_ call head on a list of length 0.

Whatever shall we do? We could painstakingly verify by hand that we never
call head on a list of length 0. But we programmers are lazy and hate doing
these things. We could waste our lives writing TDD tests with 100% coverage to
ensure we never accidentally call head with an empty list, but as lazy
programmers we are too lazy for that as well. If only we could make the type
of a list depend on its value, thus encoding in the type system whether
calling "head" with any given list is safe or not.
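(The usual non-dependent workaround in today's Haskell, for comparison, is to
push the uncertainty into the return type; `safeHead` is a conventional name,
not a standard library function:)

```haskell
-- The conventional workaround: make the possible absence of a head
-- explicit in the return type, so every caller is forced by the
-- type checker to handle the empty case.
safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x
```

This rules out the runtime exception, but it still doesn't let us _prove_ a
list is non-empty, which is where dependent types come in.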

Dependent types to the rescue! As mentioned earlier, with dependent types the
type of a list can depend on a value. Haskell as yet does not support this
(a partial lie: Haskell has some limited dependent typing support). In Haskell
I could not say:

    
    
        data List a n = Empty | Cons a (List a (n-1))
    

However, in a dependently typed language such a definition is in fact
possible. The type of List would then be something like List :: * -> Int -> *
. We can then redefine head to have a type that only accepts lists of type
"List a n" where n > 0. This means that passing a list which potentially has
n == 0 is a type error.

The compiler will verify at compile time that your code can _never_ pass a
list of length 0 to head. If it is possible, that is a type error and your
program will not compile; if it is not possible, we have now effectively
eliminated empty-list exceptions from our program, without any tedious manual
testing. Yay for science!
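(For flavor: this particular example can already be approximated in GHC using
the GADTs extension and type-level naturals. This is a sketch, not full
dependent typing, and `Vec`/`vhead` are made-up names:)

```haskell
{-# LANGUAGE GADTs, EmptyDataDecls #-}

-- Type-level natural numbers (no runtime values, types only).
data Z
data S n

-- A list whose length is tracked in its type.
data Vec n a where
  VNil  :: Vec Z a
  VCons :: a -> Vec n a -> Vec (S n) a

-- A head that only accepts provably non-empty vectors:
-- calling it on VNil is rejected at compile time.
vhead :: Vec (S n) a -> a
vhead (VCons x _) = x
```

Here `vhead VNil` simply fails to type check, so the empty-list case never
needs a runtime test.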

~~~
jaekwon
For any given Haskell compiler, I would think that I can construct a program
where the compiler won't be able to determine, in polynomial time, whether the
list has length 0 or not.

~~~
eru
Have you read the post you are commenting on? Epigram does not even try to be
a complete language.

------
ramanujan
What we really need is for some people to turbocharge ghcjs
(<http://news.ycombinator.com/item?id=1818820>), so that we can get optional
static type checking in Javascript.

Haskell is great at parsing[1] and great at type checking. So how about
having it parse the relatively simple JS grammar[2] and do some static
checking on JS code?

You could augment existing JS code with an optional accepts/returns syntax to
provide type annotations:

    
    
      foo = function(x) { return [x+1,x+2]}
      foo.accepts = Numeric
      foo.returns = [Numeric, Numeric]
    

This is similar to Collin Winter's typecheck for python
(<http://pypi.python.org/pypi/typecheck/>), which has some nontrivial bugs but
which is conceptually interesting.

[1] You could use Matt Might's new derivative parser if you want to have some
fun while doing this (he has a Haskell implementation to boot!)

<http://matt.might.net/articles/parsing-with-derivatives/>

[2] Real World Haskell has a good JSON example that should be easily
extensible to the full syntax: [http://book.realworldhaskell.org/read/writing-
a-library-work...](http://book.realworldhaskell.org/read/writing-a-library-
working-with-json-data.html)

EDIT:

Doctor JS by Mozilla is a (very) good start here -- maybe it already does
everything we need...

<http://doctorjs.org>

------
Confusion

      Greenspun's Tenth Rule of Programming:
      Any sufficiently complicated C or Fortran program
      contains an ad-hoc, informally-specified, bug-ridden,
      slow implementation of half of Common Lisp.
    

Replace C by Lisp and CL by Haskell? Rinse, lather, repeat every few decades?

~~~
gsivil
Can you please give us some historical examples where this quote applies?
That would be interesting to know.

~~~
abrahamsen
AutoCAD and Emacs would be literal examples of "sufficiently complicated" C
programs by this definition.

My guess is that it is intended to cover any C or Fortran program that
includes an ad-hoc interpreter, an ad-hoc dynamic type system, or an ad-hoc
garbage collector. That would include e.g. sendmail (at least an ad-hoc
interpreter) and GCC (all three).

My guess is that most C programs (I don't know about Fortran) we would
intuitively call "complex" contain one of the three technologies, making it
harder to come up with a clear counter-example. Some might avoid the "ad-hoc"
part by linking with a generic extension language like Tcl or Lua. Or even
Common Lisp.

~~~
wisty
Well, almost any big traditional application (not really web apps) eventually
grows a scripting language for extensions and/or user macros: Basic (in
Office), Javascript and Java (in web browsers), Lisp (in the above
applications, and probably many more), Lua (in a lot of games), AppleScript,
QuakeC (that C thingy in Quake)...

Lisp is relatively easy to implement, powerful, and intuitive to
non-programmers (compared to OOP; Python, Perl, and Ruby are also easy, but
nowhere near as old), so it's a common choice.

~~~
ckwop
I didn't believe this "easy to implement" line, but over Christmas I wrote my
own LISP implementation on top of .NET and it comes to about 350 lines of
code.

That includes all the list operations, an arbitrary-precision number type and
a host of the usual built-in functions (add, subtract, multiply, divide, and,
not, or, etc.).
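(To give a flavor of that "easy to implement" claim, here's a toy
s-expression evaluator sketched in Haskell; `SExpr`, `eval`, and `prim` are
invented names, and only quote plus two arithmetic primitives are handled:)

```haskell
-- A toy s-expression evaluator: enough to suggest why a small Lisp
-- core fits in a few hundred lines.
data SExpr = Atom String | Num Integer | List [SExpr]
  deriving (Eq, Show)

type Env = [(String, SExpr)]

-- Evaluate an expression in an environment of named bindings.
eval :: Env -> SExpr -> SExpr
eval env (Atom a) =
  maybe (error ("unbound: " ++ a)) id (lookup a env)
eval _ n@(Num _) = n
eval _ (List (Atom "quote" : x : _)) = x          -- quote: return unevaluated
eval env (List (Atom op : args)) =
  prim op (map (eval env) args)                   -- apply a primitive
eval _ e = error ("cannot eval: " ++ show e)

-- Built-in operations.
prim :: String -> [SExpr] -> SExpr
prim "+" xs = Num (sum     [n | Num n <- xs])
prim "*" xs = Num (product [n | Num n <- xs])
prim op  _  = error ("unknown primitive: " ++ op)
```

For example, `eval [] (List [Atom "+", Num 1, Num 2])` evaluates to `Num 3`.
A reader and a handful more primitives get you most of the way to a usable
core, which is roughly what the 350-line figure above suggests.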

Eye-opening to say the least!

------
voxcogitatio
I'm not sure that Haskell could, given time, subsume dynamically typed
programming. But even if it is possible, why would it be desirable? Most
Haskell people don't think very highly of dynamic typing, and the extensions
to the language reflect that: they generally add more typing, e.g. GADTs,
multi-parameter type classes, extensible kinds, etc. And dependent typing is
certainly not a move towards dynamic typing! Casting it as such shows poor
understanding of both concepts.

------
kleiba
There's always a trade-off between the expressiveness of a programming
language and its complexity. I think Lisp may (still) have a slight advantage
when it comes to that balance, i.e., it is more accessible to novices. And
adding more things to an already complex language like Haskell doesn't sound
like it would shift that balance in the latter's favor.

