
Why Lisp is a Big Hack and Haskell is Doomed to Succeed (2011) - mmphosis
http://axisofeval.blogspot.ca/2011/01/why-lisp-is-big-hack-and-haskell-is.html
======
jes5199
For a language that prioritizes "safety" above all things, there is an awful
lot of flying blind and dangerously in Haskell. It's so, so easy to write
Haskell code that's safe until you change something distant in the system,
which changes what gets lazily evaluated when, and now you have a very
serious resource leak. And because of the IO restrictions, you aren't likely
to put logging in your code - and if you _do_ , the logs will themselves
change the lazy evaluation behavior of your code. I've seen Haskell programs
that stop crashing when you pass in --debug !
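
A minimal sketch of the kind of leak being described (assuming GHC and the standard Data.List): a lazy left fold quietly accumulates unevaluated thunks, and whether and when they get forced can depend on distant code, like a --debug branch that happens to print the value.

```haskell
import Data.List (foldl')

-- Lazy foldl builds a chain of a million (+)-thunks before anything
-- forces the result; on large inputs this is a classic space leak.
leaky :: Integer
leaky = foldl (+) 0 [1 .. 1000000]

-- foldl' forces the accumulator at every step and runs in constant space.
frugal :: Integer
frugal = foldl' (+) 0 [1 .. 1000000]
```

Both compute 500000500000; the difference only shows up in memory profiles, which is exactly why the leak is easy to miss until something elsewhere changes evaluation behavior.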

If the Haskell environment were more like a virtual machine - like in Java -
where you could connect into a side-channel and see what types of data were
persisting in memory as the program ran - you'd at least have a chance of
debugging this sort of thing. But instead it compiles to machine binaries.

There doesn't seem to be any interest in the Haskell community in making tools
to deal with this sort of thing - they say "you should learn not to make
resource-leaking code". Which is the same thing the Lisp hackers say - "just
learn not to make type errors".

~~~
hibikir
And that's why the 'war' has just moved to the JVM: Clojure is a Lisp, and
Scala steals a whole lot of things from Haskell, trying to make them actually
practical.

I think Haskell's ultimate role is a bit like Ruby's: It can't really win, but
it's destined to be influential. That's a much harder road for Lisp, as its
greatest strengths are also its greatest flaws.

~~~
cordite
Doesn't Scala suffer from the JVM's constraints, like no tail calls and
everything being an object, such that compile times become enormous working
around them?

Haskell is indeed influential, but it is not remaining stagnant either.

~~~
jonesetc
The way I understand it, Scala does support tail-call recursion, by way of
compiling self-recursive calls to a loop. Compilation was definitely a pain
when I experimented with it.

~~~
cordite
I recall that Scala uses "trampolines", compiling to something more like a
state machine.

~~~
judk
I thought Actors do that, but not Scala in general(?)

~~~
cordite
Rúnar Bjarnason talked about it in his presentation "FP Programming is
Terrible" [0] (he makes up for that title in the end)

I asked him about it on twitter since it reminded me of thunks, he said " a
trampoline executes a sequence of thunks, yeah" [1]

[0]:
[https://www.youtube.com/watch?v=hzf3hTUKk8U](https://www.youtube.com/watch?v=hzf3hTUKk8U)
[1]:
[https://twitter.com/runarorama/status/449070763421618176](https://twitter.com/runarorama/status/449070763421618176)
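
For what it's worth, the trampoline/thunk connection can be sketched in a few lines of Haskell (names here are illustrative, not Bjarnason's actual library): each step is either a final value or a thunk producing the next step, and a driver loop bounces between them instead of growing the call stack.

```haskell
-- A trampolined computation: a finished result, or a thunk for the next step.
data Trampoline a = Done a | More (() -> Trampoline a)

-- The driver loop: force one thunk at a time. On the JVM this kind of
-- self-call compiles to a plain loop, which is the point of the encoding.
run :: Trampoline a -> a
run (Done a) = a
run (More k) = run (k ())

-- Mutual recursion written in trampolined style: the recursive calls
-- are wrapped in thunks rather than made directly.
isEven, isOdd :: Int -> Trampoline Bool
isEven 0 = Done True
isEven n = More (\_ -> isOdd (n - 1))
isOdd 0 = Done False
isOdd n = More (\_ -> isEven (n - 1))
```

`run (isEven 1000000)` finishes without the mutual calls deepening the stack at each hop, which is how JVM languages fake unlimited mutual tail calls.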

------
lukeqsee
I cannot speak for Lisp, but I am in the throes of writing a compiler from
scratch in Haskell (while concurrently learning Haskell). I feel/felt very
much as this article described: constrained. And then I learned eta reduction.
Then I learned monads. And then I learned … The list just keeps going.

Having slowly learned the language the proper way, I am now able to do
anything I did in an imperative/OOP/whatever language, but now I have that
foundation of type safety. Like many will say, if it compiles, it most likely
just works. Coupling this with automatic checking via ghc-mod, I am just as
performant (in terms of writing code) and encounter half as many bugs as in
any other language I've ever used. Haskell isn't a panacea, but it's a very
good language.

~~~
axman6
> if it compiles, it most likely just works

I believe the more you learn, the more this becomes true as well. To begin
with, you'll write a lot of code that has potential bugs (like missing cases
when pattern matching, using types that you really shouldn't, using error when
something like Maybe or Either would be more flexible, etc.). Eventually you
realise how to write code that avoids a lot of these problems.
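
A tiny example of that last point: error type-checks but hides a crash, while Maybe puts the failure case in the type so callers can't forget it.

```haskell
-- Partial: the type promises an `a` it cannot always deliver, and the
-- empty case becomes an invisible runtime crash.
unsafeHead :: [a] -> a
unsafeHead (x:_) = x
unsafeHead []    = error "unsafeHead: empty list"

-- Total: the empty case is visible in the type, so every caller is
-- forced by the compiler to handle it.
safeHead :: [a] -> Maybe a
safeHead (x:_) = Just x
safeHead []    = Nothing
```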

------
avmich
An interesting capacity of Lisp, to me, is that programs are represented
uniformly - so-called homoiconicity. For example, given a Lisp program, you
can relatively easily add - statically - a debug statement after each
statement, and then use the resulting program instead. That would be harder to
do in Haskell, because the Haskell syntax is much richer.

Another doubt about "eating all languages' lunches" comes from having multiple
different paradigms in programming languages. I can imagine Haskell eating,
say, Prolog's lunch - for example, Norvig has shown how Prolog could be
implemented as embedded in Lisp. But I suspect it's going to be harder to
repeat in Haskell the strong points of Forth (stack computations), Tcl
(strings as universal media?) or J (composability of primitives), even though
some approximations could be made.

It's a fishy idea to search for a singular "perfect" language - unless that's
something like English, with all its imperfections built in.

~~~
iopq
The singular "perfect" language would have to have pluggable syntax - so that
anyone can have their perfect syntax as long as it generates the same parse
tree.

At the same time, its type system will be pluggable too, so that people can
keep improving the type system (dependent types, dynamic types, etc.) without
changing the language.

Oh yeah, and of course its code generation/execution model will be pluggable
as well. Do you want to interpret it? Sure! Do you want to compile it so it
performs better on your target computers? No problem! Compile it to
JavaScript? Why not?

In that way everyone can be programming in the same language that's flexible
enough for everyone's use, but at the same time can contain DSLs. Pluggable
syntax/custom type checking means you can embed SQL code and make sure it's
valid before running it. It also means SQL could be a custom SQLStatement data
type with its own semantics.
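
As a rough sketch of the SQL-as-a-typed-data-structure idea (all names here are made up for illustration), a Haskell GADT lets the host type checker reject ill-typed queries before anything reaches a database:

```haskell
{-# LANGUAGE GADTs #-}
import Data.List (intercalate)

-- A toy embedded query language. Expressions carry their result type,
-- so comparing an Int column to a String literal is a compile error.
data Expr a where
  IntLit :: Int    -> Expr Int
  StrLit :: String -> Expr String
  Col    :: String -> Expr Int     -- toy simplification: Int columns only
  Equals :: Eq a => Expr a -> Expr a -> Expr Bool

-- columns, table, WHERE clause
data Query = Select [String] String (Expr Bool)

-- Render a type-checked query to SQL text (illustrative only; a real
-- library would also handle quoting and injection safety).
render :: Query -> String
render (Select cols tbl cond) =
  "SELECT " ++ intercalate ", " cols
            ++ " FROM " ++ tbl
            ++ " WHERE " ++ renderExpr cond

renderExpr :: Expr a -> String
renderExpr (IntLit n)   = show n
renderExpr (StrLit s)   = show s
renderExpr (Col c)      = c
renderExpr (Equals a b) = renderExpr a ++ " = " ++ renderExpr b
```

Here `Equals (Col "age") (StrLit "x")` simply doesn't compile, which gives the "valid before running it" property without any pluggable machinery.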

In short, only a language that allows every type of programming can be the
"one true language".

~~~
taeric
Doesn't this just describe lisp, though? Most of your questions have a dialect
or library set that is specifically made to address them.

~~~
iopq
No, lisp doesn't allow you to specify your own syntax. You can't just start
writing stuff without parens and define what that parses to later.

~~~
taeric
You'd be somewhat surprised on that point. Especially if you just take it such
that everything is just trying to build up an s-expression. While the
homoiconicity of the language is incredibly cool for macros and whatnot, I
don't think you strictly need it. Especially not at the top level. (That is,
if you made a language that "compiled" down to s-expressions, what is
missing?)

At the extreme end, take a look at Dylan.

Though, I was really referring to your other points. It seemed every one of
your "questions" is directly addressed.

------
ksikka
Your logic and reasoning seem to be correct except for one thing...
"Better languages" don't necessarily succeed. I mean, just look at PHP and
Javascript... Would you say that they were doomed to succeed because of their
language features? There's more to "language success" than the technical
characteristics of the language.

~~~
skybrian
Yes, PHP and JavaScript show that deployment is everything. A language that
works well with your target platform is better than one that doesn't.

(With PHP, it was apparently a bit subtle; ISPs could host it easily and it
worked well with live updates via FTP, and that beat out language features.)

Objective-C is a third example: it's popular because of the iPhone.

------
bbradley406
The author raises valid points about Haskell, but I disagree with his
statement about a static-typed lisp. Typed Racket is useful and enjoyable to
program with, and includes a handy optimization coach.

------
pron
Using the type system to create less buggy code, beyond what's done in
mainstream statically typed languages (that are not types-all-the-way-down),
is an interesting but still very-much-open research question. The type system,
besides being expressive, would need to be easy to understand and debug, and
require significantly less effort to wield than debugging less-typed code
does.

On a related note, I'm watching with interest how Java 8's new pluggable type
systems[1][2] will play out (I understand the project is expected to have a
big release April 1st). Those are pluggable intersection types that can be
inferred and injected into legacy code that was written without them.

[1]
[http://docs.oracle.com/javase/tutorial/java/annotations/type...](http://docs.oracle.com/javase/tutorial/java/annotations/type_annotations.html)

[2] [http://types.cs.washington.edu/checker-framework/](http://types.cs.washington.edu/checker-framework/)

------
kristianp
Original hn discussion:

[https://news.ycombinator.com/item?id=2062436](https://news.ycombinator.com/item?id=2062436)

------
systems
i would argue that clojure is very successful, i think (probably more a
statement of the obvious). language success depends equally on the libraries
available for it

clojure has a lot of libraries, and leiningen

and to echo the "worse is better" slogan, i think that languages that offer a
little more than the current mainstream languages will succeed more than
languages that offer a lot more

most programmers are doers, they prefer to spend more time doing rather than
learning

languages that are too smart ... are less likely to succeed not until the day
... they become only a little bit better than the mainstream

we move slowly from c to c++ to java to ruby ... the next big language is one
that is only a little bit better than ruby ... not a lot better

i think clojure fit the bill

~~~
deadghost
If clojure is only a little bit better than ruby, is there a language you
think is a lot better?

~~~
systems
well, i guess ... having second thoughts about it, clojure is very different
from ruby, being a lisp

i am sure most of the ideas in clojure won't be alien to most rubyists ... but
still it's a fairly large departure from ruby

a language that is only one or a few steps above ruby will have to use closer
syntax ... be more or less focused on OO rather than functional programming

------
ggchappell
Can someone explain why anyone would think that dynamic typing is
"clearly-on-the-horizon" for Haskell? (Not saying it isn't; just wondering.)

Also, FTA:

> Haskell is clearly moving towards dependent typing, which in theory, allows
> the expression of _arbitrary invariants_ that are maintained statically,
> without having to run the program.

Well, "arbitrary" within limits. Dynamic type checking is still strictly more
powerful than static type checking -- in the sense of what Boolean statements
it can test about particular values -- no matter how you slice it.

(No, I'm not arguing for dynamic type checking.)
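
A concrete version of the point, sketched in Haskell: a runtime check can test an arbitrary Boolean property of a particular value, say primality, and the best an ordinary (non-dependent) static type can do is remember that the check happened.

```haskell
-- In real code the constructor would be hidden behind a module boundary,
-- so the only way to obtain a Prime is to pass the runtime check.
newtype Prime = Prime Int deriving (Eq, Show)

-- A dynamic check of an arbitrary predicate on the actual value;
-- an ordinary static type system cannot decide this at compile time.
mkPrime :: Int -> Maybe Prime
mkPrime n
  | n > 1 && all (\d -> n `mod` d /= 0) [2 .. n - 1] = Just (Prime n)
  | otherwise                                        = Nothing
```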

------
orthecreedence
So Haskell will eventually be able to do everything lisp does, but it's
_impossible_ for lisp to do everything Haskell does. I smell bullshit.

Both languages have their place. Sometimes functional and type-safe isn't the
best way to go about something. Sometimes it is. Sometimes it depends on the
programmer.

There is no One Right Way or One True Language.

~~~
6cxs2hd6
Also, it's a weird comparison because Haskell is one, crisply-defined thing --
but what does he mean by "Lisp"? Common Lisp? Scheme? Racket? Clojure? Does
Dylan count? The only Lisp he mentions specifically is Emacs' Elisp -- which
would be a straw man to pick for a comparison.

Things like Typed Racket (and its port to Clojure.Typed) show that you can
have a lisp with static typing, as well as the traditional strengths of a
lisp.

You can also have a lisp like Dylan or Pyret that doesn't even use
s-expressions, but is most definitely a lisp.

------
X4
haskell evangelism? I mean I like haskell, but this theory or prophecy is
based on no facts.

------
m0nastic
It seems weird to think that any currently existing language will wind up
being the one-true-language.

As an industry (and a research area) there still doesn't seem to be any real
consensus around "what's best"; just a bunch of differing opinions and trade-
offs. I don't think any amount of evidence (were it even to exist) would
convince someone not amenable to strongly-enforced static types to see their
value.

The entire practice of software development seems oriented around feelings and
past experience. I can appreciate that there are groups of people doing
research to try to bring rigor and quantified data to the process, but if, at
the end of the day, a developer can spend a weekend putting together a node.js
web app and have it take off and prove successful, you've pretty much lost
any opportunity to convince them that they should stop using their tools and
switch to some different tools.

I don't actually think there's anything wrong with that either; good for them
for being suspicious.

I decided to investigate Haskell about 8 months ago when I had an opportunity
to write a big system for my job. It fit well within my constraints and
requirements, and the little I knew of it at the time seemed like it would be
a good language to spend time getting to know.

I liked that everything in the language seemed like it got there through
reasoned debate and experimentation, and that it seemed like a language-
feature sandbox that more mainstream languages were eventually pulling from
(Perl advocates say the same thing about Perl, mainly that it already has all
the features that other languages are now trying to figure out how to
implement). I liked that they don't seem to punt on the hard problems (which
over time become more and more of the problems left for languages to address),
even if that means that doing complicated things in Haskell is complicated.

I don't know how I'd feel about Haskell suddenly becoming super popular
though. Even aside from the "God I hate that this band I've been into for a
while is now suddenly popular" trendiness, I don't think the community would
be able to handle sanely what an influx of massive amounts of new users would
do to things. It's hard enough getting all the category theorists and abstract
algebra professors to deal with the fact that Cabal takes lower and upper
bounds on dependencies.

If I could spend the entirety of my career using Haskell for everything, maybe
that would be great. I haven't gotten good enough yet to have strong opinions
about its failings, so I'm still very much in the honeymoon period.

But that seems like a silly thing to shoot for, even if I feel the same way
about Haskell in 10 years that I do now. And it seems silly to expect that
everyone else would feel the same way.

~~~
tel
I love Bob Harper's view that something like Type Theory will eventually
become the one-true-language. His arguments arise from a POV pretty different
from the standard argument here: it's not that some particular implementation
will win, but instead that the entire design space of languages will
eventually gravitate to type theory because it's just _right_.

~~~
brudgers
At the end of the day who wants dependently typed bash?

It's a great intellectual position for happy hour at the campus pub. Yet from
a practical standpoint, it's hard to see how programming languages requiring
more attention to the type system will facilitate banging out code for
ordinary problems more quickly.

There are times when it is really important to be able to prove code is
correct and times when it is enough to just provide a plausible answer. The
market for ML on Rails remains without validation.

~~~
rtpg
With something like Haskell (very good type inference), the types will only
bother you if you're writing wrong code (or writing particularly complex
expressions; I'm not necessarily against compilers complaining about things
that are complicated).

It's not a free lunch, but damn is it cheap.

~~~
brudgers
The Haskell compiler will complain if I am writing wrong Haskell code, or more
generally when the types are indeterminate and can't be inferred.

This is not the same as 'wrong code' in the abstract. The code will run fine if I
don't pass in mismatched data, or more generally bad data. And if I am passing
in bad data, static typing doesn't give me good answers, it just keeps the
program from crashing. Don't get me wrong, there are times when crashing is
bad. But there are times when the cost of a runtime error is nominal and the
value of flexible code is high.
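
That distinction in miniature (a sketch using the standard read and Text.Read.readMaybe): the first function is perfectly well typed and compiles cleanly, yet still fails at runtime on bad data the types never saw.

```haskell
import Text.Read (readMaybe)

-- Well typed, compiles, and throws a runtime exception the moment
-- someone passes "forty-two" instead of "42".
parseAge :: String -> Int
parseAge = read

-- The type can record that the data might be bad, but only a runtime
-- check actually decides it; the failure is data-dependent either way.
parseAgeSafe :: String -> Maybe Int
parseAgeSafe = readMaybe
```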

Static typing trades one type of cognitive overhead for another. The Java
program of 500 classes is its manifestation.

~~~
efnx
What is the cognitive overhead Haskell is introducing? Types? Because as far
as I can tell we use types in all OOP, they're just implicit and not checked.

~~~
brudgers
Static type checking, regardless of language, requires thinking about programs
in a particular way because one possible mode of failure is prioritized over
all other modes. It does so regardless of whether absolute type safety
deserves to be prioritized given the purpose of a particular program and it
does so regardless of whether absolute type safety is an appropriate concern
at a particular stage of the program's development.

Static typing can make "how do I get this to compile?" a design criterion.
Consider the year 2038 problem: in MySQL, various date types are coerced to
the timestamp type by design. Otherwise the program would not compile.
Compilation takes precedence over problem solving.

[http://dev.mysql.com/doc/refman/5.0/en/datetime.html](http://dev.mysql.com/doc/refman/5.0/en/datetime.html)

~~~
tel
I've personally found that static typing is an aid to comprehension and
thought. I spend more time fiddling with untyped code than typed code. I also
disagree that static typing prioritizes a particular mode of failure—the
notion of failing to typecheck is a rather general one.

~~~
brudgers
There are two contexts in which one can think about data types. The first is
choosing among or constructing data types as abstractions. The data type as
metaphor is useful regardless of language. An important property of this
context is that it's not just useful externally, via an automobile class in a
used car lot application, but internally, with ports and pipes for I/O, and
threads and locks and semaphores for processes, and so on.

But the other context in which we select and choose and construct data types
is because a language insists upon it. Here our choices are not based on how
to best represent the world, but by how to package our metaphor into a pre-
existing schema. The very first time we compile our code, we have been forced
by the compiler to crystallize our code based on an early guess.

When a flat roofed building uses scuppers to provide emergency overflow
drainage, it is good if water passing through them makes a mess of the
plantings below and perhaps stains the facade. It indicates that the primary
drains are clogged before the roof collapses. Likewise, runtime type errors
might be preferable to zeros silently inserted into a database.

Static and dynamic typing each catch some types of errors at the expense of
masking other types of errors.

~~~
tel
I think runtime errors are a fine way of detecting such failings. I don't
understand why typing is at odds with that.

I think types make us write out the why next to the what. That why might be a
domain model justification, or something much more trivial. It's also
completely possible to encode an untyped regime in a type system. You're
always crystallizing your design; you can either provide good information to
understand its failings and be more prepared to fix them, or not, and chase
logic errors throughout an undocumented, dynamic system.

------
edem
I fail to see the concrete examples you came up with to back your claims up.
Or...were there none?

------
mrottenkolber
Lisp can do that stuff. Point two depends on the implementation, but point one
is standardized.

------
ruttiger
Why controversial blog post titles get clicked and why...

------
enupten
The impression I get from reading the comments is that many believe Common
Lisp not to have types!

With CLOS (and :before methods!), optional type declarations, and the awesome
implementations which do type inference, you tend to catch far more bugs
compared to, say, something like Python.

Yes, dear brethren, Common Lisp code (with SBCL) will not compile if the ftype
conflicts with the inferred types of the arguments.

Edit: Besides, the author does not seem to hold the same opinion about Lisp
vs Haskell today.

~~~
Grue3
It really is amazing how many errors SBCL catches during compilation. At first
it annoyed me, coming from the more lax CLISP, but then I found that the
errors were always the result of my own mistakes.

------
tehabe
How is something which needs a >700 MB runtime doomed to succeed? I think even
Java is smaller than that. Not sure though.

~~~
thirsteh
700 MB runtime implies every compiled binary is 700+ MB, which is obviously
untrue. You probably mean the compiler and standard libraries.

