
Things in Haskell which don't exist elsewhere at all - mietek
http://www.reddit.com/r/haskell/comments/1k3fq7/what_are_some_killer_libraries_and_frameworks/cbldpgy
======
warsheep
I've read most of LYAH, and I have no problem understanding mathematical
concepts, but man, Haskell people truly don't know how to convey this
"awesomeness" they keep talking about.

1\. Where do you use automatic differentiation? I've done machine learning,
signal processing, etc., but never even heard of it until now. Why should I
care? Your pitch should include that (especially when the Wikipedia article
doesn't really provide _real_ use cases).

2\. What's special about lenses? I tried reading
[http://www.haskellforall.com/2013/05/program-imperatively-us...](http://www.haskellforall.com/2013/05/program-imperatively-using-haskell.html)
but there's no summary of what this is about, and from the first
paragraphs it seems like a Haskell workaround for setters and immutability.
Again, I feel like the community is not pitching these things correctly.
People like me start reading, don't understand the point of it all, and give
up.

And I can go on... What is the target audience of these features (or Haskell
itself)? Is it people like me, or is it more hardware validation engineers,
automatic proof system developers, database people?

EDIT: If anyone is interested, edwardkmett has replied to me in
[http://www.reddit.com/r/haskell/comments/1k3fq7/what_are_som...](http://www.reddit.com/r/haskell/comments/1k3fq7/what_are_some_killer_libraries_and_frameworks/cbm1ic6)

~~~
tel
Lenses begin with getting and setting. Since Haskell is immutable, setting is
a little different: it's a way of quickly generating an update function for a
type. A lens keeps getting and setting bound together as a first-class value,
which makes it a value representing a "focus" on a particular field in a
complex, nested value.
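
A minimal sketch of that idea, with no library dependency (the real lens
package is far more general, but uses the same functor-shaped encoding; the
names below mirror its API with simplified signatures):

```haskell
{-# LANGUAGE RankNTypes #-}
-- A hand-rolled van Laarhoven lens: a getter and a setter bound together
-- as one first-class value.
import Data.Functor.Const    (Const (..))
import Data.Functor.Identity (Identity (..))

type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

-- Read the focused field out of the larger structure.
view :: Lens' s a -> s -> a
view l = getConst . l Const

-- Produce an updated copy of the larger structure.
set :: Lens' s a -> a -> s -> s
set l a = runIdentity . l (const (Identity a))

data Point = Point { px :: Int, py :: Int } deriving Show

-- A lens focusing on the px field.
xLens :: Lens' Point Int
xLens inj (Point x y) = (\x' -> Point x' y) <$> inj x

-- view xLens (Point 1 2)    ==  1
-- set  xLens 9 (Point 1 2)  ==  Point 9 2
```

Because a `Lens'` is just a function, lenses into nested records compose with
ordinary function composition `(.)`.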

Then you can compose these lenses together and retain these properties. You
can also pick lenses which operate with different multiplicities, from 0 to
infinity. Finally, you can use their seamless connection to object
isomorphisms to create a very general interface for working with various kinds
of "similar" objects in Haskell.

At the end of the day you can write a first-class lens which represents
getting and setting over a set of parameterized Map keys deep inside a
stateful computation, mapping over anything that looks like a string, and
viewing/modifying it as decoded JSON.

    obj . ix "aKey" . each . _Object . ix "3" . _Number .~ 4

might modify a record in a type like this

    IsString s => SomeState { obj :: Map s s }

where the string values of the Map have a JSON schema like

    {"1": ..., "2": ..., "3": <aNumber> }

~~~
bjourne
But you haven't presented anything that is _special_ about lenses. For
example, your lens expression would be trivially translated to this Python:

    for o in obj["aKey"]: o["3"] = 4

The OP's point was that lenses seem like a workaround for Haskell's lack of
mutable state.

EDIT: What I mean is that to show lenses' unique benefits, you would have to
come up with an example in which the code is more succinct than the equivalent
code implemented without lenses _in another language_.

~~~
tel
But that isn't anything resembling a faithful translation. It's not
first-class, not compositional, only a "setter", lacks decent error handling
(I didn't mention that, but it's built into "mistargeted" lenses), and depends
upon parsing the JSON in some other step.

Lenses let you think of a JSON-encoded string as an actual JSON value without
ever explicitly doing the decoding, due to their close connection to
isomorphisms and "partial isomorphisms" (called, non-standardly, Prisms here).
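
A rough sketch of that idea (the lens package encodes prisms quite
differently, via profunctors, but the intuition is a pair of functions: a cast
that can fail and an embedding that cannot; the toy `JSON` type here is
hypothetical):

```haskell
-- A "partial isomorphism" as a plain pair: previewP may fail,
-- reviewP always succeeds.
data JSON = JNumber Double | JString String deriving (Show, Eq)

data Prism' s a = Prism' { previewP :: s -> Maybe a  -- try to cast s down to a
                         , reviewP  :: a -> s        -- embed a back up into s
                         }

-- A prism focusing on the number case, in the spirit of lens's _Number.
_Number :: Prism' JSON Double
_Number = Prism' (\j -> case j of { JNumber n -> Just n; _ -> Nothing })
                 JNumber

-- previewP _Number (JNumber 3)   ==  Just 3.0
-- previewP _Number (JString "x") ==  Nothing
```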

Furthermore, lenses don't really have anything to do with mutable state—they
just happened to form a convenient wrapper for using the State monad, but
that's really a coincidence.
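
A self-contained sketch of that "convenient wrapper" point, using only base
and the mtl package that ships with GHC (the lens package's `+=` operator
packages up this same pattern):

```haskell
{-# LANGUAGE RankNTypes #-}
-- A lens produces a *pure* update function; threading it through the
-- State monad via `modify` is a convenience, not mutation.
import Control.Monad.State (State, execState, modify)
import Data.Functor.Identity (Identity (..))

type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

-- Apply a function under a lens, purely.
overL :: Lens' s a -> (a -> a) -> s -> s
overL l f = runIdentity . l (Identity . f)

data Counter = Counter { _count :: Int } deriving Show

countL :: Lens' Counter Int
countL inj (Counter n) = Counter <$> inj n

-- A "stateful" increment built from the pure update.
tick :: State Counter ()
tick = modify (overL countL (+ 1))

-- execState (tick >> tick) (Counter 0)  ==>  Counter {_count = 2}
```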

Succinctness is difficult to judge. It would be a good exercise, one I'm not
going to attempt in an HN comment, to translate the entirety of the concept
embedded in that one line into, say, Python. It would start to feel like an
XPath implementation.

(Edit: Also, as usual, the whole typesafe thing. That Python fragment can lead
to runtime errors. The Haskell one never does—using it inappropriately is
simply impossible.)

~~~
bjourne
Still, you haven't shown why anyone should be impressed by Haskell's lenses.
It's like you do not understand that isomorphisms and first-classness are
totally irrelevant to me and most programmers unless they lead to better code.

Here is a challenge for you or anyone else who loves lenses:

Take a small snippet of real source code that you or anyone else has written
and that uses lenses, and post it here. I'll then translate it into Python
that has the equivalent effect. If the translation is impossible or less
pretty than the Haskell original, I'll award you $1000 internet points.

~~~
tel
Let's say you've serialized a tree structure of versioned data as JSON.
Branches are arrays and leaves are objects.

    data OTree = Obj Object | Node (Data.Vector.Vector OTree)

    instance FromJSON OTree where
      parseJSON (Array as) = Node <$> traverse parseJSON as
      parseJSON (Object o) = return (Obj o)
      parseJSON _          = fail "OTrees are objects and arrays"

    instance ToJSON OTree where
      toJSON (Node as) = Array (fmap toJSON as)
      toJSON (Obj o)   = Object o

Now _some_ of these objects have a "version" key whose value is an array of
semantic versioning numbers. Write a function which decodes and re-encodes a
new tree with each of these semantic versioning numbers incremented at the
patch level. If the version isn't in that format, just ignore it.

In Haskell you'd want to write a generic traversal over the objects of the
tree, useful whenever you want uniform access to the contained elements.

    eachObj :: Traversal' OTree Object
    eachObj inj (Obj  o ) = Obj <$> inj o
    eachObj inj (Node as) = Node <$> traverse (eachObj inj) as

And then here's the finale, the actual lens code specific to the task.

    upgrade = _JSON . eachObj . ix "version" . _Array . ix 2 . _Number +~ 1

~~~
bjourne
That's a contrived example and not "real source code." Furthermore, you are
leaving so many symbols undefined that it is hard to see what is going on.
Where does 'traverse' come from? Nevertheless, here is how you would do it in
Python:
[https://gist.github.com/bjourne/6219037](https://gist.github.com/bjourne/6219037)

~~~
tel
It's extracted, not contrived: updating nested attributes on a tree of objects
as a nice one-liner. The most contrived bit was that I didn't use a built-in
tree type, so I had to define more stuff explicitly.

`traverse` comes from Data.Traversable but is re-exported by lens, since a
lens can be seen as a generalization of traverse.
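
For reference, `traverse` maps an effect-producing function over a structure
and collects the effects; a lens `Traversal` generalizes exactly this shape:

```haskell
-- traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
-- With Maybe as the Applicative, one failure fails the whole traversal.
halveEvens :: [Int] -> Maybe [Int]
halveEvens = traverse (\x -> if even x then Just (x `div` 2) else Nothing)

-- halveEvens [2, 4, 6]  ==  Just [1, 2, 3]
-- halveEvens [2, 3]     ==  Nothing
```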

------
boothead
The best thing that exists in Haskell and nowhere else I've found isn't a
library at all:

It's the experience of grubbing around in unfamiliar libraries (in my case a
database lib and a visualization lib), not knowing much Haskell and struggling
to get the damn thing to compile. The _very first_ time I got the types to
line up, the program ran and I had my timeseries graph from the database.

This is a magical, life-changing experience.

Any other language I've used would have meant many iterations of "this is
null" or "you passed a string where it was supposed to be a list of strings"
(looking at you, Python)! Now that I know it's possible, I much prefer to do
my thinking up front, and know that there's a compiler out there that's got my
back :-)

~~~
vidarh
I used to think this too, but realised that it makes little difference to me
whether I spend the time getting something to compile or spend the time
getting it to execute correctly.

I tend to prefer dynamic typing these days. A lot of what I do feels easier to
achieve when I can throw something together to test ideas without thinking
about types, then refine the idea based on actually using a cobbled-together
prototype; static typing often got in the way of that.

Especially coupled with Smalltalk style runtime inspection / modification on
error (e.g. for Ruby I use "pry" which lets me drop into a shell anywhere in
the program and modify or inspect state and then continue execution).

~~~
tel
I think it's a genuine weakness of static types that they inhibit prototyping
when you first begin. HM type systems tend to be liberal and powerful enough
that once you learn them well they're easier to prototype in—but that comes
with time.

In particular, type-driven development is wonderful. You can write light,
compiler-checked specifications of code you haven't bothered to write yet and
see if your logic all typechecks even without an implementation. I can run
through lots of ideas that way very quickly, since the only thing I have to
produce is the high-level type skeleton of my program.

Then I just go through and implement it all.

It's a lot like test-driven development, except that your executable tests are
much more lightweight and carry a strong sense of the logic of your program.
If your tests don't hang together, you'll find out eventually. If your types
don't hang together, your compiler tells you immediately.
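
A sketch of what that looks like in practice; all the names here are
hypothetical, and every body is `undefined`, yet GHC still checks that the
pieces compose before anything is implemented:

```haskell
-- A compiler-checked specification: the high-level plumbing must
-- typecheck even though no logic has been written yet.
newtype UserId = UserId Int

data User = User { uid :: UserId, name :: String }

fetchUser :: UserId -> IO (Maybe User)
fetchUser = undefined   -- e.g. a database lookup, to be written later

render :: User -> String
render = undefined      -- formatting, to be written later

-- The skeleton already proves these pieces fit together.
greet :: UserId -> IO String
greet u = maybe "unknown user" render <$> fetchUser u
```

If `greet` had tried to pass a `User` where a `UserId` belongs, the compiler
would reject the skeleton immediately, long before any implementation exists.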

~~~
gruseom
This is a really interesting comment. It would be worth turning into a longer
post with examples.

~~~
tel
It's in my queue. I'll post it here when I get around to writing it.

------
tel
The entirety of that thread has some very interesting discussion
([http://www.reddit.com/r/haskell/comments/1k3fq7/what_are_som...](http://www.reddit.com/r/haskell/comments/1k3fq7/what_are_some_killer_libraries_and_frameworks/))

------
tomp
Wow, that's awesome! Makes me really want to learn Haskell, something I have
been putting off for far too long.

------
sspiff
I can write libraries in COBOL and then proceed to claim "these libraries only
exist in COBOL, therefore COBOL is better than other languages", which is to
me exactly what this post reads like.

~~~
fusiongyro
That's pretty ungenerous of you. There are factual differences between
languages, and Ed's list is full of illuminating examples. You've wandered
into a conversation on /r/haskell between Haskell users about why they like
Haskell. It's insane to expect them to be less excited or biased.

~~~
sspiff
I'm not complaining about /r/haskell posters making pro-Haskell posts. I'm
guilty of similar things.

I'm just not sure what it's doing on the HN front page, and I find the
contents of the post lacking in terms of convincing outsiders into a
favourable opinion on the language.

There's nothing wrong with such a comment on /r/haskell, but I think it's out
of place here. I realize the fact that it is voted up to the front page proves
that many people think otherwise, but when I last checked, people were allowed
to voice contrary opinions on the Internet.

~~~
marshray
I haven't seen anyone trying to prevent you from voicing a contrary opinion,
only people politely trying to explain to you that they feel you are missing
an essential point of understanding. (OK, _I_ was a bit snarky, but it was on
point)

~~~
sspiff
You're right, I was a bit overreacting with the opinion bit, and slightly
thick-headed in accepting the arguments of some of the other posters.

~~~
marshray
We've all been there. No one who looks at Haskell comes away unchanged. :-)

------
joe_the_user
I played with reflection in Ruby for a short time and found it really
fascinating, until I realized everything got pathologically slow quite quickly
- especially if you add one reflection effect on another.

All this stuff sounds cool. Is it fast and scalable?

~~~
fusiongyro
The two things aren't especially commensurate. A lot of the fun of Haskell is
type-level stuff, which is resolved by the compiler at compile time; there are
no direct run-time artifacts. Moreover, several of the things Ed talks about
have to do with concurrency, which isn't even something you can do with Ruby.
Speculative execution and STM both enable real multicore concurrent
processing. Even if Haskell did perform as badly as Ruby, without the GIL and
with real concurrency it's in a better position.

Adding complexity at the type level does have one major performance effect,
though, which is on compile time. Does it count as fast and scalable if the
compile time reaches into the minutes? In terms of compilation time, Ruby will
always beat Haskell, because it doesn't have any. This is a meaningful
difference when you're trying to iterate rapidly on your webapp, but most of
the web frameworks for Haskell separate out templating to help with this
problem.

Most people eventually complain that high-performance Haskell doesn't resemble
the Haskell we're taught to write. This is less and less true as things like
iteratee-based I/O and really advanced libraries like bytestring and text
displace older facilities, but it is still something to think about. Haskell
tends to perform well by sacrificing lots of memory; this is why you see
discussion of "space leaks" rather than "memory leaks" and whatnot. There are
a lot of ways to address the problems that come up, but you need a strong
handle on how Haskell works before you can accurately diagnose and treat
them. This is a significant barrier to using Haskell in production. It
doesn't take long to acquire the expertise to throw Ruby into production; the
road with Haskell is a bit longer and a bit more taxing, because it's so very
foreign.

~~~
vidarh
> several of the things Ed talks about have to do with concurrency, which
> isn't even something you can do with Ruby.

Concurrency has always been possible in Ruby just fine. There has just been a
limit on system-level _threads_ until 1.9 (and there is still a limitation in
the form of the interpreter lock). On the other hand, nothing has ever stopped
us from doing multi-process concurrency, including using shared memory (though
with the caveat that Ruby won't let us put Ruby objects there).

~~~
gtani
[https://news.ycombinator.com/item?id=6198068](https://news.ycombinator.com/item?id=6198068)

Haskell allows you to run "green" or userspace threads in the millions,
similar to what Erlang, Scala (Akka), and F# provide, with a scheduler
designed to run at those numbers. So it's something like Celluloid, but I
don't know how many people have used Celluloid in production; in Haskell the
runtime has been well documented (Simon Marlow's O'Reilly book) and tested in
heavily loaded systems.
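
A sketch of the scale involved; `forkIO` threads are scheduled by the GHC
runtime rather than the OS, so spawning a hundred thousand of them is routine
(the trivial workload here is just for illustration):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (newChan, readChan, writeChan)
import Control.Monad (replicateM_)

main :: IO ()
main = do
  let n = 100000 :: Int
  done <- newChan
  -- Spawn n green threads; each does trivial work and reports back.
  replicateM_ n (forkIO (writeChan done ()))
  -- Block until all n threads have finished.
  replicateM_ n (readChan done)
  putStrLn ("ran " ++ show n ++ " green threads")
```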

~~~
FreeFull
Do note that the changes described in the paper to allow green threads in the
millions won't hit GHC until version 7.8.1 is released.

~~~
enigmo
You can use millions of threads in versions before 7.8.1 as well; they'll just
run a lot slower if they're all actively doing IO.

------
33a
There are plenty of automatic differentiation libraries out there for C++,
Python, Matlab, etc. Not sure how you spin that as a unique feature of
Haskell.

~~~
tel
The Haskell type system allows you to avoid perturbation confusion (combining
mismatched infinitesimals is a type error), for one.
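
For illustration, a minimal forward-mode AD sketch with dual numbers (Edward
Kmett's ad package is far more general and uses type-level branding to rule
out perturbation confusion; this only shows the core trick of carrying a
derivative alongside each value):

```haskell
-- A dual number pairs a value with its derivative; the Num instance
-- applies the sum and product rules automatically.
data Dual = Dual { primal :: Double, tangent :: Double } deriving Show

instance Num Dual where
  Dual a a' + Dual b b' = Dual (a + b) (a' + b')
  Dual a a' - Dual b b' = Dual (a - b) (a' - b')
  Dual a a' * Dual b b' = Dual (a * b) (a' * b + a * b')  -- product rule
  fromInteger n         = Dual (fromInteger n) 0          -- constants: zero derivative
  abs    (Dual a a')    = Dual (abs a) (a' * signum a)
  signum (Dual a _)     = Dual (signum a) 0

-- Differentiate a function at a point by seeding the tangent with 1.
diff :: (Dual -> Dual) -> Double -> Double
diff f x = tangent (f (Dual x 1))

-- diff (\x -> x * x + 3 * x) 2  ==  7.0   (since d/dx (x^2 + 3x) = 2x + 3)
```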

