
John Carmack: Thoughts on Haskell [video] - trevorhartman
http://functionaltalks.org/2013/08/26/john-carmack-thoughts-on-haskell/
======
pohl
I like this quotation:

 _Languages talk about being multi-paradigm as if it's a good thing, but
multi-paradigm means you can always do the bad thing if you feel you really
need to, and programmers are extremely bad at doing the time-scale
integration of the cost of doing something that they know is negative. I
mean, everyone will know... it's like "this global flag: this is not a good
thing, this is a bad thing - but it's only a little bad thing" - and they
don't think about, you know, over the next five years, how many times that
little bad thing is going to affect things. So brutal purity: you have no
choice._

In particular, I like the bit about integrating your technical debt function
with respect to time, over the time you have to live with the debt, and how
bad programmers are at thinking in those terms. We tend to think about how
much technical debt we have at a given fixed point in time, but the area under
the curve is what bites you.
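
To put the integral image concretely (my formalization, not Carmack's): if d(t) is the instantaneous cost a shortcut imposes at time t, the total price over the lifetime T you live with it is

```latex
C = \int_0^T d(t)\,\mathrm{d}t
```

so even a "little bad thing" with a small, constant d accumulates linearly with how long the hack survives - and each additional hack tends to raise d(t) itself.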

~~~
rybosome
Reminds me of something a former boss used to say. "Pay me now, or pay me
later; either way, there has to be a payment". It sounds like Carmack is
saying that brutal purity always forces the "pay me now" approach since it's
harder to write this code, but it is worth it in the long run.

~~~
oneandoneis2
As credit cards have shown, people are very bad at making good decisions when
they don't have to pay for them until "tomorrow"

------
kybernetyk
I'm still waiting for his in-depth article about his experiences porting
Wolfenstein to Haskell - mainly to see if it's worth learning Haskell.
Because, as he said in the video, most examples in books (I guess that goes
for evangelization blogs too) are toy examples, and I'd like to know how
viable the language is for bigger systems.

~~~
thirsteh
This is highly anecdotal, but I've built and been part of very practical/non-
theoretical, large Haskell projects (100k+ lines, which is a lot for Haskell).
The only big complaint I have is that it's somewhat hard to do loose coupling,
i.e. for something somewhere to reference a type without either redeclaring
the type (when that's possible), or having a huge, centralized Types.hs that
declares all the types that are used in different places (to avoid cyclic
imports.) (Contrast with e.g. Go or ML where you have interfaces/modules
without 'implements'.)
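
The pattern described above looks roughly like this - a multi-file layout sketch with hypothetical module and type names, not code from any real project:

```haskell
-- Types.hs: the single, centralized module both components import,
-- so neither component ever has to import the other (no cycle).
module Types where

data User    = User    { userName :: String }
data Session = Session { sessionUser :: User }

-- Auth.hs (separate file)
module Auth where
import Types (User (..), Session (..))

login :: String -> Session
login name = Session (User name)

-- Render.hs (separate file)
module Render where
import Types (Session (..), User (..))

banner :: Session -> String
banner s = "logged in as " ++ userName (sessionUser s)
```

The cost is exactly the one described: Types.hs grows without bound, and every component is coupled to it.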

This isn't unique to Haskell by any means, but it's the only real complaint I
have about Haskell as a language for non-toy projects. The benefits definitely
make it my go-to language. It's hard to list them all, but by far the nicest
feeling is the correctness: when your code compiles, 60% of the time your
program works every time. (Not to imply that tests aren't necessary--
QuickCheck is great for that.) It's an otherworldly feeling to write a program
not in terms of what to do, but what kinds of filters you want to put on
something, and have it just work (and either stay working, or break future
compiles if something's changed!) after compiling 10 lines of code, when you
would have written at least 50-70 and had to debug it in almost any other
language.
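
A tiny sketch of what that "filters on something" style looks like in practice (hypothetical Event type and threshold, not from the talk):

```haskell
import Data.List (sortBy)
import Data.Ord  (comparing, Down (..))

data Event = Event { evUser :: String, evBytes :: Int } deriving Show

-- Say *what* you want (large events, biggest first), not how to loop:
largeEventsBySize :: [Event] -> [Event]
largeEventsBySize = sortBy (comparing (Down . evBytes))
                  . filter ((> 100) . evBytes)

main :: IO ()
main = mapM_ print
  (largeEventsBySize [Event "a" 50, Event "b" 300, Event "c" 150])
```

And as the parent says, renaming evBytes or changing its type breaks this pipeline at compile time, not in production.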

Edit: I'll add another complaint: Haskell is like C++ in that it's incredibly
easy for a codebase to become completely unmanageable if your team doesn't
have a common style/discipline. Go is a nicer language for
"average"/"enterprise" teamwork, I think, since it almost forces you to write
programs in a way everyone will understand. If you're in a team with good
programmers that you trust not to abuse the language, this is a non-issue.

Edit: Okay, another one: If you change your Types.hs, the recompilation can
take a long time in a large codebase, similar to C++. But GHC/Cabal keep
getting faster.

Think that's it.

~~~
evincarofautumn
Have you tried .hs-boot files for separate compilation with circular
dependencies? It’s not bad, but it does lead to some extra crapwork when
changing interfaces, much like with C++ headers.
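
For anyone who hasn't seen them, an .hs-boot file is roughly a C++-header-style forward declaration (hypothetical names here; the GHC manual has the details):

```haskell
-- A.hs-boot: a trimmed interface for module A, compiled before A itself.
module A where

data T            -- abstract: constructors deliberately omitted
size :: T -> Int

-- B.hs (separate file): breaks the A <-> B cycle via the boot interface.
module B where
import {-# SOURCE #-} A (T, size)

doubled :: T -> Int
doubled x = 2 * size x

-- A.hs can then import B normally, but must stay consistent with
-- A.hs-boot -- the "extra crapwork when changing interfaces" above.
```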

~~~
thirsteh
I haven't, and it does look pretty cumbersome. So far a couple of Types.hs,
i.e. for functionally separate components, have worked out fine, but I'll
check it out if that becomes unwieldy.

------
ExpiredLink
> _Error establishing a database connection_

Diabolical, stateful database connections blocking a Haskell treatise! They
should have copied the database on each request instead.

~~~
chrismonsanto
I don't really get the joke. Generally functional languages will share more
data than imperative languages, and therefore will copy less data. In an
imperative language, it makes sense to copy whatever data you are working on,
so your modifications don't inadvertently affect other parts of the program.
In Haskell it doesn't.

Explain?

~~~
TylerE
No, typically functional languages end up copying _more_ , not less, since
typical algorithms will e.g. return a fresh copy of the modified structure -
the purists detest mutation. The benefits of data sharing are, IMO,
overstated outside of a few niche, datastore-y areas.

~~~
SilasX
At the level of _source code logic_ it appears to be copying and returning a
new data structure. That doesn't mean the compiled, optimized code is
performing all the copies that the program appears to be doing.

Pure FP = don't mutate variables in the program. Of course the actual
implementation can reuse memory blocks or else pure FPers would have to keep
buying new memory!
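
Persistent data structures make this concrete. A minimal sketch with Data.Map (from the containers package that ships with GHC): the "modified copy" shares everything but the O(log n) path to the changed key, and the original is still intact:

```haskell
import qualified Data.Map.Strict as Map

main :: IO ()
main = do
  let m1 = Map.fromList [(1, "a"), (2, "b"), (3, "c")]
      m2 = Map.insert 2 "B" m1  -- "copies" m1, but only allocates the
                                -- spine down to key 2; the rest is shared
  print (Map.lookup 2 m1)       -- Just "b": the original is untouched
  print (Map.lookup 2 m2)       -- Just "B"
```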

~~~
TylerE
Pure FP doesn't have variables. It has bindings.

~~~
marssaxman
This seems like splitting hairs.

~~~
kyllo
When combined with lexically scoped closures, if you try to shadow your value
bindings in a functional language and expect it to work the same way as
mutating a variable does in an imperative language, you're in for a surprise.

~~~
marssaxman
Sure, you have to restrict the scope in which rebinding is legal, but I don't
see how it's fundamentally any different from the use of a phi-node in SSA
form.

~~~
kyllo
Here's an SML example:

    - val a = 1;
    val a = 1 : int
    - fun f() = a;
    val f = fn : unit -> int
    - f();
    val it = 1 : int
    - val a = 2;
    val a = 2 : int
    - f();
    val it = 1 : int  (* a is still bound to 1 as far as f() is concerned *)

Now in Javascript:

    > var a = 1;
    undefined
    > function f(){return a};
    undefined
    > f();
    1
    > a = 2;
    2
    > f();
    2

If you bind a val, define a function that references it, later shadow the
val binding, and then call the function again, the function still sees the
earlier binding: it's a closure over the environment at the point where the
function was defined, not at the point where it was called. This is unlike
variable assignment in imperative languages.
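
The same experiment in Haskell, for comparison (shadowing introduces a new binding; it never assigns):

```haskell
a :: Int
a = 1

f :: Int
f = a          -- f closes over the binding of 'a' in scope *here*

main :: IO ()
main = do
  let a = 2    -- a brand-new binding that shadows the outer one
  print f      -- 1: f still sees the binding it captured
  print a      -- 2: only this scope sees the shadow
```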

------
agentultra
AFAICT he doesn't reach any particularly satisfying conclusions and just
speculates that Haskell is an interesting avenue of research that may bear
fruit for game development.

It doesn't seem like he has put the same amount of effort into experimenting
with Lisp. He doesn't mention any attempt to port Wolfenstein to Common
Lisp. Instead he seems content speculating from the same position many Lisp
doubters hold after reading a few books and working through some exercises
(which is ironic, considering his impetus for the Haskell project). I hope he
gave Lisp the same treatment as Haskell before drawing any conclusions, but
from this talk it doesn't seem like he has.

Lisp for game development could be an interesting avenue (and has precedent
in AAA console development). The dynamic vs. static argument isn't the
interesting feature; personally, I think the symbolic model of computation is
far more compelling. I've read posts by programmers who've written a high-
level language for financial trading algorithms in Common Lisp that compiles
down to optimized VHDL for running on FPGAs. Sure, you don't have a static
analyzer to tell you you've done something wrong before you run your
programs, but I've rarely seen that become an issue in practice at that
level. There are plenty of Common Lisp libraries that have been around a long
time and don't require much maintenance, which makes me wonder where this
belief that dynamic languages don't produce solid, maintainable code comes
from.

In my rather limited experience I find the over-specification required by
statically typed languages to be an impediment to writing robust, composable
software (at least it's much more difficult, and tends to lead to
Greenspunning if you try to go that route).

Either way... a very interesting talk and it's cool to hear that he's
experimenting with this stuff. Carmack is in a rare position to have such a
breadth of experience and deep technical knowledge that even just messing
around with this stuff might make waves throughout the industry.

~~~
bad_user
If you go read his opinions, you'll see that he has been a strong advocate
for static analysis. His position is very understandable since he spent a lot
of time working on complex pieces of C/C++ code, encountering a lot of subtle
accidental errors that could have been avoided with better tools for static
analysis or with a better language. He also worked mostly on client side
software, where the fail fast / fail hard strategy of projects built in
dynamic languages doesn't work so well.

Lisp vs ML vs others is a very subjective issue since languages involve
different kinds of trade-offs, so optimal choices depend on the project or on
personal style.

Also, this is John Carmack. For me he was a God 10 years ago and he's still
one of the best and most practical developers we have today. Seeing him talk
about Haskell is amazing.

BTW, I've been a big fan of Scala lately, and I don't see myself using much
Haskell precisely because its tools for modularity seem limited. In Scala you
can use OOP to build abstract modules, with the much hyped Cake pattern being
based on that. Dynamic languages are naturally more modular, however I still
prefer static languages.

~~~
sfvisser
> ...I don't see myself using much Haskell precisely because its tools for
> modularity seem limited.

> Dynamic languages are naturally more modular

Why is that? I'm pretty convinced pure and especially lazy languages allow for
great modularity.

See [http://augustss.blogspot.nl/2011/05/more-points-for-lazy-
eva...](http://augustss.blogspot.nl/2011/05/more-points-for-lazy-evaluation-
in.html) for an interesting view on this.

~~~
taeric
I would think it is because of how "modules" can effectively be "bolted" on
top of existing things quite easily.

Consider putting a new "module" on a bicycle. If you were doing it statically,
it would have to have the screws in the proper place to be able to attach as
it needs. Dynamically, however, you just use a zip tie to hold your piece onto
the bike.

To be sure, if you buy a nice bike, many of the common attachments have
"static" points where things can be added. If you want to mount a phone
holder, however, you're much more likely to improvise something more
adaptable at fitting things on.


------
tel
Very worth noting that Carmack here is talking specifically about "pure
functional programming", not just functional programming.

------
srl
Here's the video: [http://youtu.be/1PhArSujR_A](http://youtu.be/1PhArSujR_A)
(the text on the page just introduces Carmack)

~~~
haxorize
He starts talking about FP/Haskell at the 2:06 mark.

[http://www.youtube.com/watch?v=1PhArSujR_A#t=126](http://www.youtube.com/watch?v=1PhArSujR_A#t=126)

------
Strilanc
Google cache:
[http://www.google.ca/search?q=cache:functionaltalks.org/2013...](http://www.google.ca/search?q=cache:functionaltalks.org/2013/08/26/john-
carmack-thoughts-on-haskell/)

The article is essentially just a link to the fourth part of his keynote at
Quakecon (
[http://www.youtube.com/watch?v=1PhArSujR_A](http://www.youtube.com/watch?v=1PhArSujR_A)
)

------
sirclueless
> Database Error: Error establishing a database connection

Looks like we brought down another one. I was looking forward to reading this
too.

