
Exploratory Haskell (2015) - mrkgnao
http://www.parsonsmatt.org/2015/12/09/exploratory_haskell.html
======
junke
This example does not fit my experience of "exploratory" programming. Here the
author implements DPLL from a textbook, and while there is a gap to fill
between pseudocode and Haskell, the problem is already well defined. Think-
upfront-then-code works quite well in those cases. Having compiler feedback
surely helps to get it right.

But exploratory to me means that the problem is not really well defined or
understood in the first place. Just as you draft things on paper to get a
clear idea of what you want to do, you can use your programming language to
express parts of the problem more or less clearly. That also means not being
afraid of getting things wrong in the process and being able to code
generically. Now, with some languages, you can't explore. I tried in Ada,
which I actually happen to like, but every choice was a commitment that would
take much time to adapt later, so I actually stuck to pen and paper. I think
it can be easier with ML type systems, where definitions can be more loosely
coupled, but I am not sure.

~~~
dllthomas
Yeah, I agree this is a poor example of exploratory programming.

I actually do think Haskell does a good job for exploratory programming,
though. Whether I'm writing Haskell or C or Python, as I code I'm making
decisions about how to represent and organize things. When some of those
decisions inevitably turn out to be the wrong ones, I have to figure out what
code needs to change in tandem. Tests can help reveal incorrect behavior, but
it's far better to have a tool that can point at the specific inconsistent
expressions and tell you something about what's wrong with them.
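A tiny sketch of what that looks like in practice (the types and names here are made up for illustration, not from the article): change a representation decision, and the compiler enumerates every expression that no longer fits.

```haskell
import qualified Data.Map as Map

-- Hypothetical example: suppose a model was originally
--   type Model = [(String, Bool)]
-- and that turned out to be the wrong decision. Switching the
-- representation to a Map turns every expression still written
-- against the list shape (lookup, (:), list pattern matches) into
-- a type error, so the compiler itself lists the code that must
-- change in tandem.
type Model = Map.Map String Bool

lookupVar :: String -> Model -> Maybe Bool
lookupVar = Map.lookup   -- was the list-based `lookup`

main :: IO ()
main = print (lookupVar "p" (Map.fromList [("p", True)]))
```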

------
hellofunk
This is interesting, and I'm curious and respectful of Haskell, but when I
read a post like this, while I appreciate many of the strengths of static type
checking, it just makes my head hurt to see what this guy goes through to
please the compiler, and I'm left thinking that I'd just rather stick to a
dynamic functional language like Clojure or Erlang and sketch ideas out more
quickly. Maybe I'm just not patient enough for Haskell.

~~~
chongli
I think the approach taken by the author is simply the wrong one. Writing a
huge amount of code without any clue what the types are going to be is just
silly.

Haskell makes it very easy for you if you work with it in a more interactive
fashion, asking it to infer types for you as you go along. One of the best
features for doing this is Typed Holes [0].

[0] https://wiki.haskell.org/GHC/Typed_holes
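For readers who haven't seen the feature, a minimal sketch (the function here is made up for illustration): leave an underscore where an expression belongs, and GHC reports the type the hole must have, along with the relevant bindings in scope.

```haskell
-- Writing an underscore where an expression belongs:
--
--   mapPair :: (a -> b) -> (a, a) -> (b, b)
--   mapPair f (x, y) = (_, f y)
--
-- makes GHC report something like "Found hole: _ :: b" together
-- with the bindings in scope (f :: a -> b, x :: a), which points
-- directly at the completion below.
mapPair :: (a -> b) -> (a, a) -> (b, b)
mapPair f (x, y) = (f x, f y)

main :: IO ()
main = print (mapPair (+ 1) (1, 2 :: Int))
```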

~~~
mightybyte
I think in this case it's a reasonable approach to writing the `dpll` function
because that function doesn't actually care about the specifics of the types.
This is a nice example of encapsulation and implementation hiding, a concept
familiar to object-oriented and functional programmers alike. He could have
easily gotten it to compile by stubbing things out as follows:

    data Clauses = Clauses
    data Symbols = Symbols
    data Model = Model

    isTrueIn = undefined
    isFalseIn = undefined
    findPureSymbol = undefined
    minus = undefined
    including = undefined
    findUnitClause = undefined


I think it's more a question of personal preference than right or wrong.

------
striking
I love the concept and idea behind Haskell, and even reading this piece was
very cool. But as much as I can sort of guess at what's happening, the steps
being taken to simplify the original pseudocode still seem like some kind of
space magic.

I see stuff like "Learn you a Haskell" and whatnot, but it all seems so
theoretical compared to, say, writing a web server or parsing CSV or building
a programming language with Haskell. I get monads. But I don't get how to
bridge the gap between my knowledge and what I see here.

How have other people gotten to this point?

~~~
jerf
You might consider Rust instead. It has a nontrivial type system, but the
benefits are a lot more clear.

I hypothesize that if you're still interested in Haskell after that, you may
find it easier to learn. I don't know if many people have walked that path,
though.

~~~
runeks
It's the purity that's the challenge; you can basically make your types as
accurate or inaccurate as you want. But purity forces you to understand a
couple of things first in order to be productive (e.g. monads, monad
transformers, type classes).

In my view, Haskell is at the opposite end of the spectrum from Rust when it
comes to performance versus abstraction. Haskell abstracts _everything_ away,
and only manages to get good performance because of purity (I would argue),
while Rust wants to be close to the metal, preferring performance over
abstraction (e.g. with zero-cost abstractions).

~~~
jerf
"In my view, Haskell is in the opposite spectrum of Rust when it comes to
performance over abstraction."

That is why I suggested it. It's a different "hair shirt" than Haskell, but
it's one that's a _lot_ easier to explain the benefits of.

------
runeks
I've realized that all my programming is exploratory. Every time I implement
something, I end up knowing more about what I've implemented, enabling me to
create an even better implementation the second time. This cycle repeats until
I'm satisfied with the result, or I need to go to the bathroom.

The real challenge to writing code is understanding the nature of what you're
trying to express, down to its smallest, intuitively correct constituents.
While you are in this state of understanding, the code writes itself.

------
hyperhopper
I don't use Haskell very much, but I'm curious what the use is of defining

    type Symbol = String

Why not just have -> String -> in the function's type? The only thing I see
this gains is forcing you to add extra boilerplate everywhere converting
Strings to Symbols.

~~~
chrissoundz
It is just a type alias, as far as I know. I suppose it adds more 'annotation'
to your program; in addition, if you decide to change the type of Symbol to
be, say, an Int, then you have a single definition to change initially.
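A short sketch of the distinction being discussed (the functions are hypothetical): a `type` alias is fully interchangeable with `String`, so no conversion code is needed anywhere; only a `newtype` would force explicit wrapping and unwrapping.

```haskell
-- A type alias is just another name for String: no conversions.
type Symbol = String

greetAlias :: Symbol -> String
greetAlias s = "sym: " ++ s   -- a plain String works here directly

-- A newtype (a hypothetical stricter variant) is a distinct type,
-- so callers must wrap, and the function must unwrap.
newtype SymbolN = SymbolN String

greetNewtype :: SymbolN -> String
greetNewtype (SymbolN s) = "sym: " ++ s

main :: IO ()
main = do
  putStrLn (greetAlias "p")             -- no boilerplate needed
  putStrLn (greetNewtype (SymbolN "p")) -- wrapping required
```

So the alias costs nothing at use sites; the boilerplate concern only applies to the newtype form.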

------
CJefferson
While this is indeed interesting, I feel I should point out that Haskell is
one of the worst languages out there for writing a SAT solver which performs
remotely well, as all modern algorithms work fundamentally on in-place
modification of data structures, so you just have to put your whole program in
the IO monad.

~~~
dllthomas
> so you just have to put your whole program in the IO monad

You have to put the bits that actually modify things in IO or ST. While it's
better Haskell style to keep as much pure as you can, when you do need to
modify things Haskell still works just fine.
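A minimal sketch of that pattern (a made-up example, not SAT-specific): the in-place mutation lives inside ST, and `runST` keeps the function's external interface pure.

```haskell
import Control.Monad.ST
import Data.STRef

-- Sum a list using an in-place mutable accumulator. The mutation
-- is local to the ST computation; runST returns a pure result, so
-- callers never see IO.
sumInPlace :: [Int] -> Int
sumInPlace xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc

main :: IO ()
main = print (sumInPlace [1 .. 10])
```

For a real solver one would use mutable arrays (e.g. from Data.Array.ST) the same way, but the shape is identical: mutation inside, purity outside.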

~~~
platz
There is no edict that says haskell programs cannot live in IO.

Also, it is especially rich for some folks to use 'no true Scotsman' without
even being a Scotsman.

~~~
CJefferson
I'm not a Haskell expert, but I am a SAT expert, and I have worked with good
Haskell programmers. We managed to get decent performance, but what we ended
up with looked a lot like a C program translated into Haskell, so it wasn't
clear we had really gained anything. Memory safety, I suppose.

~~~
dllthomas
"Like C, but with memory safety and QuickCheck" sounds like a pretty good
language for a SAT solver... "Idiomatic Haskell won't produce a very fast SAT
solver" is something that doesn't surprise me.

------
mrcactu5
Sometimes when I get stuck on a small part, I let the compiler suggest types.
This is a different workflow than sitting back and trying to figure it all out
yourself.

~~~
quchen
Sometimes I'm not even sure how I managed to write Haskell before GHC 7.8.1
(when typed holes were introduced).

------
KennyCason
I also have never understood the argument "In dynamically typed language X you
can just start prototyping, but in statically typed language Y you have to
fight the type system just to get it to compile." If your types aren't lined
up but it compiles, even in a dynamically typed language, your program won't
behave as expected, i.e. you are likely to have a bug (e.g. object + int, and
it compiles). How does one really save time by not worrying about these
things? I almost always start coding from the core data structures up, even
when prototyping. A seemingly opposite approach to the OP's post; not saying
my method is THE correct method :)
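To make that point concrete, a small made-up Haskell example: the mismatched version is rejected at compile time, where a dynamic language would defer the same failure (or a silent coercion) to runtime.

```haskell
-- The "object + int" situation: this version does not compile,
-- because a String is not a number.
--
--   bad :: Int
--   bad = length "abc" + "1"   -- rejected: "1" is not an Int
--
-- The fixed version makes the conversion explicit:
good :: Int
good = length "abc" + read "1"

main :: IO ()
main = print good
```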

~~~
dguaraglia
Yeah, that's kind of misguided. The biggest advantages of dynamic languages
when doing "exploratory" programming are late binding, built-in syntax for
commonly used data structures, and duck typing. Those allow you to scaffold a
lot of stuff pretty quickly, try the bit of code that you actually care about,
and then decide from there whether you want to do a full implementation or
just use the snippet you wrote.

Compare that to a language like older Java, where you need to build a class no
matter what, most likely implement a few interfaces, and add a lot of
exception handling just to get the scaffolding in place; and if you want to
initialize something as simple as a hash map, you have to do it the
painstaking, line-by-line way. Sure, your prototype may (or may not, depending
on how serious you were about handling those exceptions) be more solid, but it
took you double the time to get there, which is basically the opposite of what
prototyping should be.

With all that said, and being a huge Python fan, I still feel like strongly
typed languages should be a requirement for bigger projects that you plan to
maintain for years. Python, as great as it is, starts hurting a lot at a
larger scale. I'm really looking forward to using something like MyPy once my
projects start being Python 3+ by default.

~~~
KennyCason
Agreed.

I have had to run vardumps on some large Python/JS code bases trying to figure
out what the behavior was, because it was unclear what the variables being
passed in were. Definitely a pain as the project grows larger. I used to not
understand why people who use Ruby/JS/Python were not fans of massive
refactoring, until I owned a large PHP code base and saw first-hand how hard
it was to refactor. Search/replace only gets you so far, compared of course to
the powerful refactoring tools you get with a language like Java via an IDE.

------
dschiptsov
While this is true 80th-level hipsterism and snowflakery, it is also a
beautiful occasion to realize that there were reasons behind choosing Common
Lisp as the primary language for AI.

The Common Lisp code is almost as easy to read as the pseudocode of the book,
and it is absolutely unnecessary to go through all that type clutter and fancy
composition to satisfy the type checker. There is zero advantage in doing all
this static-typing acrobatics.

The AIMA supplementary Common Lisp code is definitely worth looking at, and it
is _the_ case study for demonstrating the advantages of dynamic typing.

BTW, the AIMA Python code is also very nice, short, and clear, but an order of
magnitude slower than the compiled native code that state-of-the-art Common
Lisp implementations produce.

BBTW, Swift3 port would be really cool.

