
Haskell’s Niche: Hard Problems - fogus
http://cdsmith.wordpress.com/2011/03/13/haskells-niche-hard-problems/
======
dododo

       Haskell shines at making frequent fundamental changes.
    

my experience of haskell is completely the opposite of this. haskell is
terrible when you want to make big changes---it's often quicker to just
rewrite completely from scratch. fixing all the little inconsistencies is a
real pain. want to add an extra parameter to a type class? want to remove an
attribute from a data type? sure, if you picked the right abstraction from the
start, this is a non-issue, but that's not how it is with a hard problem.

the type system gets in your way; this is why i now use python (colleagues
have noted the same problems when using c++; we prototype in python, scale up
in c for the slow bits). often when i'm fleshing out ideas or trying a new
approach, i don't need everything to be consistent---just the bit i'm working
on, and even then i don't care too much. if things don't quite work, i'd like a
quick way to find an inconsistency (often i write a consistency-checker
function for each class that i use when debugging, which checks types and
structures).

i find haskell fundamentally useless for tackling hard (research) problems for
this reason alone.

~~~
merijnv
> The type system gets in your way

I think that anyone saying this suffers from a fundamental misunderstanding of
how to use a type system like Haskell's. I encounter it often with people
coming from C/Python who start writing typeclasses, data types etc. and only
afterwards start adding functions and then struggle trying to get the right
types for their functions.

Fundamentally, programming is a search problem. We are searching for a
sufficiently good solution (program) in the space of all possible solutions
(all possible compiler inputs). I am of the opinion that the type system can
significantly help in searching for such a "good solution". In the words of
Conor McBride: "Sometimes it's easier to search for good programs in the space
of well typed programs, rather than in the space of ascii turds."

Personally, I start by writing down type names (just names, no implementation
or anything) and function names of what I want to do with said types and most
importantly what the type of said functions should be (inventing new type
names and functions as I discover I need them).

Say, for example, a game would start with a GameWorld type and an update
function of type "GameWorld -> some events -> GameWorld".

This way I start fleshing out the skeleton of WHAT I want to do, without any
regards yet as to HOW I plan on accomplishing it. Once I think my skeleton is
somewhat complete I define all my functions to be "undefined" (actually, I do
this as I write them down, of course). undefined has all possible types in
Haskell, so it always typechecks, but throws an exception when evaluated.

Then I start replacing the undefineds with actual code that matches the
signature and start implementing typeclasses/datatypes as required while
writing the functions. I will occasionally compile the code to see if
everything so far actually type checks.
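A minimal sketch of this skeleton-first workflow, building on the GameWorld example above (the Event type and the render function are invented for illustration):

```haskell
-- Declare types and signatures first, stub every body with undefined,
-- and keep the whole thing compiling from the start.
data Event = Tick | KeyPress Char        -- hypothetical event type

data GameWorld = GameWorld               -- fields to be discovered later

update :: GameWorld -> [Event] -> GameWorld
update = undefined                       -- typechecks; crashes only if run

render :: GameWorld -> String
render = undefined

main :: IO ()
main = putStrLn "skeleton compiles"
```

Evaluating any stub throws `Prelude.undefined`, but the module as a whole compiles, so the design can be typechecked long before anything is implemented.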

~~~
dododo
that's how you write haskell the first time. that's fine. easy, nice, simple.
you've written a lot of code; it compiles, it works, no problem.

then you realise that you want to change a data structure or a function needs
some information that you want to weave through it or you used the wrong
abstraction. then it's a pain in the ass.

~~~
ezyang
There are two points to be made here:

1. With a static type system, you can make the key change at the data type
declaration level, and then chase down the compiler type errors to figure out
all the places you need to change it. You need to do extra legwork in
dynamically typed languages to get this sort of information:
how many times have you changed some variable name in Python and forgotten to
update the reference somewhere obscure?

2. Yes, if you have some pure, functional code, and then you decide you want
to have a feature that uses state, fixing things up can be a bit tedious. I
think there's an interesting space here for refactoring tools that help take
out the tedium. I will also note that this restriction is a good thing
sometimes: you know how you add a global here and a global there and suddenly
you have an unmaintainable blob of global state? Haskell asks you to "slow
down" and re-think what you actually want to do. It encourages you to
use the minimum amount of language features to get the job done. This leads to
more maintainable code.
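A sketch of the first point (the type and names here are invented): adding a field to a record makes every construction site a compile error, so the compiler itself enumerates the places that need updating.

```haskell
-- Originally: data User = User { name :: String, age :: Int }
-- After adding the email field, every bare `User name age` construction
-- stops compiling until it is updated -- the type errors are the todo list.
data User = User { name :: String, age :: Int, email :: String }

mkGuest :: User
mkGuest = User "guest" 0 "guest@example.com"  -- had to be updated too

main :: IO ()
main = putStrLn (name mkGuest ++ " <" ++ email mkGuest ++ ">")
```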

~~~
tel
I think there's a space for partial compiles that Haskell doesn't take
advantage of well. Correct me if I'm wrong.

Assume that you've got a simple dependency tree on defined names in a program
and you want to refactor the type of one of the leafs. The type system is
_great_ here in that it'll track those mismatches straight up to the root.

The trouble appears if you want only an experimental type change at the leaf.
It would be excellent to get successful _partial_ compiles threaded up the
tree. In other words, it should be possible to make a cut in the type
dependence tree in order to test a functional subset of your program under
more rapid experimental changes.

Once you've iterated enough to decide on a new formulation of that branch of
the dependency tree, you can continue attempting typechecking up to the root
node.

~~~
ezyang
One thing I've seen done here is to copy-paste the function under a new name
(usually with a leading underscore) and then make the appropriate changes.
It's not hooked up to the rest of the system (but that wouldn't have worked
anyway), and you can still compile it and poke at it.
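A sketch of that trick (the World type and function names are invented): the original function stays wired into the program while an underscore-prefixed copy carries the experimental signature.

```haskell
data World = World { score :: Int }

-- The original, still used by the rest of the program:
step :: World -> World
step w = w { score = score w + 1 }

-- Experimental copy with a changed type; nothing else calls it yet,
-- but it compiles and can be poked at from GHCi:
_step :: World -> Int -> World
_step w n = w { score = score w + n }

main :: IO ()
main = print (score (step (World 0)), score (_step (World 0) 5))
```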

~~~
tel
That works as long as it's a simple dependency tree like I was talking about
and there is a single cut with that property. It's pretty likely you'll have
convergent nodes (like changing a datatype's interface late in the game) and
there's no simple cut that can let the thing compile.

I mean, sure, this can all be manually avoided, but I still think that having
GHCi "compile as much as typechecks" instead of stopping before loading any
symbols would be nice.

------
ekidd
I find that Haskell shines at a specific _kind_ of hard problem, namely those
involving significant math. It's particularly good for problems involving
abstract algebra. Why?

1) The Haskell community tends to rely heavily on math—it's part of the
culture. Part of this comes from Haskell's functional nature, which favors
mathematical approaches, and part of it comes from a community tradition of
using _more_ math to work around the limits of functional programming.

2) A typical Haskell library is usually an algebra defined over an abstract
data type. Haskell has excellent abstract data types, and good pattern
matching, which makes this a natural way to work.

3) The QuickCheck library generates random input data for functional programs,
and makes sure that certain invariants always hold. For example, you can use
QuickCheck to verify that (reverse (reverse xs)) is equal to xs.

4) If your problem domain has a certain mathematical structure (specifically,
a topos or a Cartesian closed category), then there are some really slick ways
to map it onto a Haskell DSL. See my notes at
[http://www.randomhacks.net/darcs/probability-monads/probability-monads.pdf](http://www.randomhacks.net/darcs/probability-monads/probability-monads.pdf)
(PDF) for an example involving Bayesian filtering and particle systems.

5) Haskell's strong type system makes it feasible to work at _very_ high levels
of abstraction without going completely insane. For ordinary CRUD apps, I
actually prefer dynamically-typed languages, but when it comes to scary math,
a good type system makes life a lot easier.

6) Because Haskell is so steeped in math, it tends to force my brain into
"math mode" well before I reach the mathematical heart of a problem.
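The reverse-reverse invariant from point 3, written as an actual property for the standard Test.QuickCheck library:

```haskell
import Test.QuickCheck

-- QuickCheck generates random input lists and checks that the
-- invariant holds for each of them.
prop_reverseReverse :: [Int] -> Bool
prop_reverseReverse xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseReverse   -- "+++ OK, passed 100 tests."
```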

So I'm not convinced that Haskell is the ideal language for small DSLs. (Ruby
shines, here, for almost exactly opposite reasons.) But when there's enough
math involved, Haskell is absolutely a wonderful language.

~~~
kunjaan
Have you tried F#? If so, could you say the same things about F# as well?

Also could you explain the last point "Because Haskell is so steeped in math"?

------
jfm3
"Reason 1: Haskell shines at domain specific languages.

"[...] if you can embed a domain specific language into the programming
language you’re using, things that looked very difficult can start to appear
doable. Haskell, of course, excels at this. [...]"

Says who? Just because I can write `f(x)` as `f x` with full featured operator
precedence and associativity tables doesn't mean I can introduce new syntax
into the language. I can't even find a moral equivalent of ELisp's rx package
for Haskell.

"[...] Lisp is defined by this idea, and it dominates the language design, but
sometimes at the cost of having a clean combinatorial approach to putting
different notation together."

This is either weird nonsense or the old variable capture argument. You can
write crappy functions in any language too; that doesn't mean we should take
away the programmer's ability to write new ones.

"Reason 2: Haskell shines at naming ideas. If you watch people tackle
difficult tasks in many other areas of life, you’ll notice this common thread.
You can’t talk about something until you have a name for it."

I applaud the willingness to appeal to natural language, but this isn't even
close to true.

This blog post seems like an argument for Haskell's value based on specific
times the author has enjoyed programming in Haskell. I call post hoc ergo
propter hoc shenanigans.

~~~
jerf
"Just because I can write `f(x)` as `f x` with full featured operator
precedence and associativity tables doesn't mean I can introduce new syntax
into the language."

There are two definitions of "DSL" and I find mixing them to be a bad idea. I
prefer to confine "DSL" to a fully-fledged actual _language_ , with its own
parser and evaluator. Haskell is just about the only language I know that
makes this easy enough to actually consider as the solution to a problem.

Then there's "glorified API to look like a language", a definition I don't
like but is clearly in current use, and the one used in this post. Haskell is
pretty decent at that, but of course you can't escape from the fact you're
using Haskell. But then, people pretty freely use the term "DSL" in Ruby where
you can't escape the fact that you're in Ruby. In either case, "adding new
syntax" is not actually part of the general definitions of DSL nowadays. The
"adding of new syntax" is an illusion, and I guess that's part of why I don't
like this definition of DSL: there's an element of deception in it, and in
practice I have found that people not 100% comfortable with the base
language the DSL is in often end up deceived and less effective. And either
way, if you really, truly want some new syntax, Template Haskell can mostly
keep up with Lisp anyhow, but not entirely. There may not be, at this moment,
a precise match for the rx package already on Hackage but I believe Haskell
has all the necessary functionality to create one, up to and including
compile-time regular expression compilation. Though given the way Haskell
works, compile-time RE compilation is much less useful anyway: laziness makes
it easy to ensure a regex is compiled only once even at run time. As is so
frequently the case, what takes a macro in Lisp does not require a macro in
Haskell.
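A sketch of the compiled-only-once point, using a toy "compiled matcher" in place of a real regex engine (all names here are invented):

```haskell
-- A top-level binding is a constant applicative form (CAF): it is
-- evaluated at most once and the result is shared by every caller,
-- so the expensive "compile" step happens a single time at run
-- time -- no macro needed.
compileMatcher :: String -> (String -> Bool)
compileMatcher pat =
  let compiled = words pat            -- stand-in for expensive regex compilation,
                                      -- forced once and captured in the closure
  in \s -> any (`elem` compiled) (words s)

needleMatcher :: String -> Bool
needleMatcher = compileMatcher "needle haystack"   -- built once, shared

main :: IO ()
main = print (needleMatcher "a needle", needleMatcher "nothing here")
```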

"This is either weird nonsense or the old variable capture argument."

No, it's a functional purity argument. Lisp in practice has the same problems
with mutation and state as pretty much any other language. Insert debate about
importance of functional purity here. (That is, I'm not making the argument
now, I'm just letting you know that's what was being referenced.)

~~~
metageek
> _No, it's a functional purity argument._

I'm pretty sure it's a reference to the fact that combining macros can be
risky; one of the risks is variable capture.

~~~
jerf
That's a subset of the functional purity problem. It's a relatively commonly
understood one, but Haskell has taken the argument far further than most, such
that it is now only a subset of a larger argument about state and mutation.
The argument is not merely that you've got some state being mutated in a bad
way, the argument is that the fact you've got state mutating _at all_ is a bad
thing. In this case, it's state in the compiler phase, but it's ultimately the
same argument. The argument would be that combining any two things that mutate
state is inferior to trying to combine two things that don't, at any level.

I'm personally still skeptical of this general argument, but I mean that in
the "true" sense of skeptical. I think in a lot of ways the benefits have
panned out as promised but there's some costs still swept under the rug.
Rather a lot of the development ongoing even now in the core Haskell community
can be viewed as various ways of lowering or eliminating the costs associated
with their approach.

(For instance, the still-ongoing, but rapidly converging, changes associated
with trying to create a decent string library that does not treat strings as
linked lists of integers. On the one hand you can see it as normal feature
work, but on the other it's a way of trying to go from having an unusually bad
string story induced by functional purity to potentially having an
exceptionally _good_ one. It's this sort of work that actually keeps me
interested in Haskell; I don't know of any other community doing so many
actually new things to prove out their philosophy, instead of rehashing
mutation-based OO in yet another way.)

------
kamechan
haskell's niche: PL theory research. as a professor of mine said, "it's an
incubator for PL theory". that said, are there merits to non-PL researchers
learning it? i think so. for one, the functional paradigm is increasingly
being integrated into the mainstream languages and the "genpop" is starting to
use FP concepts to solve problems. haskell is supposedly "the purest"
functional language. therefore, if you like FP, studying haskell is like
tapping into "the source".

while i find functional constructs aesthetically more pleasing than imperative
ones, a reason i often hear given in support of using functional languages is
the promise they offer w.r.t concurrency/parallelism. in the words of the same
professor i quoted earlier, "concurrency/parallelism is currently broken in
most other languages. it's like building skyscrapers atop matchsticks". i
believe this now. i also believe haskell's approach to be quite a big step in
the right direction (STM and atomically). an outstanding paper on the topic
exists here:
[http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/beautiful.pdf](http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/beautiful.pdf)
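a minimal sketch of the STM-and-atomically approach mentioned above, using the standard Control.Concurrent.STM module (from the stm package): two balance updates composed into one atomic transaction.

```haskell
import Control.Concurrent.STM

-- transfer is a composite STM action: other threads can never observe
-- the amount missing from one account but not yet in the other.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  modifyTVar' from (subtract amount)
  modifyTVar' to   (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  balances <- atomically ((,) <$> readTVar a <*> readTVar b)
  print balances   -- (70,30)
```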

regarding haskell's type system...in my opinion, hands down the best currently
in existence. yes, it's hard to work with at times. i strongly disliked it at
first. yes, the type errors can be a bit cryptic at times, at least until one
gets used to them. the upside? there be real magics there. and i'm not talking
about devices that begin with the letter i. the majority of the time, if i can
get code to type check it works flawlessly.

working with it over the last 6 months has really changed my thinking about
programming languages and has actually swayed me to go further in that
direction in my studies.

don't mean to sound like a fool in love, but ...

