

All computation is effectful (2009) - panamafrank
http://weedy-persistence.blogspot.com/2009/12/all-computation-is-effectful.html

======
SomeCallMeTim
For a long time now I've been looking at Haskell from the outside, and
wondering whether I should take the dive and learn the language. I usually
write apps and games, though, and my gut has always told me that there's very
little advantage to writing code that's primarily declarative, imperative, or
reactive in a pure-functional manner. Sure, I use some functional principles
to minimize code complexity, but I haven't made the commitment to learning a
new language to experience the purity.

Still, reading blog after blog telling me how awesome functional programming
is, and meeting people in person who swear by it, and who try to sway me to
the religion, made me _desire_ to learn FP, just in case they were all right.
I try to improve myself when I can, and ignoring other developers' advice
isn't a good way to improve yourself.

But after reading this well-written article, I am at peace with my long-
standing de facto decision _not_ to burn time learning Haskell. I'll continue
to use functional code organization when appropriate, and I'll probably keep
reading the articles from time to time, but to me _deterministic speed of
execution_ is far more important than "code purity." I'm a good developer
precisely because I'm good at understanding how code effects propagate, I'm
good at designing code to be clean and fast, and I know when to allow side-
effects and when to forbid them. I don't feel the need for a crutch to make my
code "more correct," and I already have access to several options to help me
make my code more concise. I wish I'd seen this (2009?) article sooner.

So no Haskell for me. Long live Python/Lua/JavaScript/C++/Go/and who knows
what's next. [1]

[1] Those are the languages I currently use the most. I make no claim that
they are better or worse in some abstract way than Haskell. In concrete ways,
however... ;)

~~~
chadaustin
You should probably still learn Haskell, because it will help you get better
at separating concepts in the other programs you write. Ideas you will learn
in Haskell that you won't necessarily learn in the languages you listed:

\- type classes and their associated laws (e.g. generalized mapping: it's
obvious that you can map across a list; it's less obvious that you can also
map across a Maybe; it's even less obvious that you can map across a
function...)

\- sum types

\- functions as a distilled programming concept, without ancillary
implementation details such as C's or C++'s "unique address" rules

\- restricted effects - subsetting IO for stronger static guarantees (like
C++'s const, but far more powerful). STM was built this way; BufferBuilder
too: [http://chadaustin.me/2015/02/buffer-builder/](http://chadaustin.me/2015/02/buffer-builder/)
It's a powerful technique in Haskell that you don't get in other languages.

\- the realization that monomorphization is a generics implementation detail

\- the feeling you get from generic, terse, dynamic-looking code that you can
still rapidly iterate on in a REPL
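To make the mapping bullet concrete, here is a minimal sketch using the
standard Prelude fmap. The Shape type is a made-up illustration, not something
from the comment above:

```haskell
-- fmap is the generalized map from the Functor type class.
xs :: [Int]
xs = fmap (+ 1) [1, 2, 3]          -- mapping across a list

m :: Maybe Int
m = fmap (+ 1) (Just 41)           -- mapping across a Maybe

f :: Int -> Int
f = fmap (+ 1) (* 2)               -- mapping across a function: here fmap is (.)

-- A sum type (hypothetical example) with its own Functor instance.
data Shape a = Circle a | Rect a a
  deriving (Show, Eq)

instance Functor Shape where
  fmap g (Circle r) = Circle (g r)
  fmap g (Rect w h) = Rect (g w) (g h)

main :: IO ()
main = do
  print xs                                     -- [2,3,4]
  print m                                      -- Just 42
  print (f 20)                                 -- 41, i.e. (20 * 2) + 1
  print (fmap (* 2) (Rect 3 4 :: Shape Int))   -- Rect 6 8
```

The same `fmap` call works unchanged on all four structures; only the Functor
instance differs, and each instance must obey the same laws.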

Haskell is not some pure ivory tower - there is plenty of imperative stateful
code written in Haskell. The value of Haskell is that it introduces a pile of
powerful ideas that you will carry through the rest of your career, even if
you don't write Haskell on a day-to-day basis.
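As one concrete instance of imperative, stateful Haskell under restricted
effects, here is a toy sketch (my own example, not BufferBuilder's code) using
the ST monad: the body mutates a reference in a loop, yet runST guarantees the
mutation cannot escape, so the function is pure from the outside:

```haskell
import Control.Monad (forM_)
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- An imperative accumulation loop over a mutable STRef.
-- runST seals the effects in, so sumTo is an ordinary pure function.
sumTo :: Int -> Int
sumTo n = runST $ do
  acc <- newSTRef 0
  forM_ [1 .. n] $ \i -> modifySTRef' acc (+ i)
  readSTRef acc

main :: IO ()
main = print (sumTo 100)  -- 5050
</gr>
```

Callers cannot tell sumTo apart from a fold written without mutation; the type
system enforces that the STRef never leaks out of runST.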

~~~
codygman
> there is plenty of imperative stateful code written in Haskell.

Yep. For instance, here's a Haskell program that uses screen scraping to log
into PayPal and check your balance:

[https://github.com/codygman/hs-scrape-paypal-login/blob/mast...](https://github.com/codygman/hs-scrape-paypal-login/blob/master/Main.hs)

------
ridiculous_fish
Here is a Haskell action that has the effect of waiting for 5 seconds:

    sleep 5

This is in the IO monad and therefore impure. As we know, it is better to use
pure functions when possible. Let us rewrite it as a pure expression:

    [0..] !! 1000000000 `seq` ()

This accomplishes the same thing with no side effects, and is therefore
better. Please note that spinning up your CPU fan is not a side effect.

Joking of course, but it does illustrate the point. Functions take time, and
time is user-visible, so all functions are effectful in that way. And C
programmers will roll their eyes at the idea that getpid has side effects
while allocating a million-node linked list does not.

Really, this is just talking at cross purposes. Haskell's notion of purity and
side effects lives not on the physical machine but in a formalism, and GHC is
its imperfect simulator.

------
dgreensp
There's nothing inherently slow or problematic about composing code out of
"pure functions." The implication that evaluating functions requires doing
lambda calculus, which requires a weird runtime like Haskell, is pure FUD.
Haskell is weird for other reasons, like the fact that expressions are lazily
instead of eagerly evaluated. You can compose your code out of pure functions
in most any programming language.

Even when we're talking about facilities provided by the runtime, the fact
that a programming language feature may have complex performance
characteristics, in exchange for allowing the programmer to think about the
problem in a more abstracted way, does not invalidate the abstraction. We
might as well have an article about garbage collection called, "There is no
garbage," making the point that at a lower level, all memory needs to be
explicitly de-allocated. Or one called, "There is no immutability," pointing
out that immutable data structures can only try to cover up the fundamental
nature of computers, which have mutable memory cells.

------
pron
Pure functions are beautiful, useful and desirable, yet I believe, like the
post's author, that a language that tries to enforce referential transparency
_everywhere_ is misguided, in that it places too great a burden on the
programmer while doing little to prevent the bugs that really matter.

Not all bugs are created equal -- some are easier to introduce and/or harder
to find than others. A language that chooses to reduce bugs by placing non-
trivial constraints on the developer -- i.e. by increasing the mental burden
-- would do well to concentrate its efforts on the bugs that cost more. IMO,
Haskell does the exact opposite, nearly eliminating data-transformation bugs
-- which are very easily found -- while doing little (though not nothing) to
reduce effect-related bugs.

Pure functional programming carries other burdens, too, some stemming from its
conceptual roots in lambda calculus, like the lack of a clear complexity
model.

I also reject the conclusion that if most costly bugs are related to effects,
our best course of action is getting rid of them altogether (or relegating
them to an opaque runtime). This solution seems strange, as side-effects -- in
spite of their name -- are the most central component of useful programs.
_Effects are the things we program_ (unless we're writing a compiler). This
solution seems to me like suggesting to a programmer that since typing is the
cause of wrist pain, she should type with her tongue. There are better
solutions already. Clojure's approach to memory effects is at least as
effective as Haskell's in preventing certain types of bugs, yet the language
places a much lower mental burden on the programmer.

Lastly, I think PFP has (unintentionally) caused many to believe it is the
only approach to a more "mathematical" mode of programming, and the best
course for formal software verification. Neither could be farther from the
truth. Haskell could not have prevented the bug discovered in Java's
(originally Python's) efficient sorting algorithms -- probably the most used
sorting code in the world today -- a few months ago, and it would have been
very hard to detect it even in Idris (if the algorithm is expressible at all
in that language). Instead, it was discovered with a software verifier for
imperative languages. It is true that PFP languages may depend in clever ways
on the Curry-Howard correspondence to help (force?) the programmer to prove
some properties of her algorithm, but spelling out a partial proof in the code
itself is not the only -- or even the best -- way to verify a program.

All this is not to say that we haven't learned a great deal from PFP and its
approach to software verification. But it is just one approach of many, and
one that is particularly intrusive.

