
Haskell Fan Site - allenleein
http://www-cs-students.stanford.edu/~blynn/haskell/
======
szc
Ben Lynn is a winner of the 26th IOCCC
([https://ioccc.org/2019/whowon.html](https://ioccc.org/2019/whowon.html)) with
a tiny Haskell compiler. We will be releasing the code in early July.

~~~
SkyMarshal
He's also the L in BLS Signatures and is currently at DFINITY building their
system using both BLS and Haskell.

[https://dfinity.org/team/](https://dfinity.org/team/)

------
pteredactyl
From my experience Haskellers spend more time talking about how perfect and
pure Haskell is than building real-world applications.

~~~
allenleein
We need more startups to use Haskell in production, like Dfinity.

~~~
pteredactyl
Has Dfinity launched yet?

~~~
pteredactyl
No

------
fnord77
> Programs should be easy to express.

In my experience, languages that make it easy to express small programs tend
to be counterproductive for large systems with many modules, many people
working on the system, and a desire for fine-grained control.

~~~
willtim
Sounds like you perhaps have experience of dynamically typed languages?
Haskell is statically typed, compiled, has a scalable multi-threaded runtime
and a very good FFI, and was designed around the notion that large programs
should be built by composing small programs.

~~~
est31
Global type inference is still a problem when you want to build large
programs. You change the implementation of one function, something breaks
twenty functions away, and you need to figure out all the interactions on the
path between them. Rust strikes a great compromise here, IMO.

~~~
ufo
I think that these days Haskell best practices recommend adding a type
annotation to every top-level declaration, to help keep error messages
manageable.
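
A minimal sketch of the idea (hypothetical function; the point is that the
explicit signature keeps type errors local to this definition instead of
rippling outward):

    import Data.List (group, sort)

    -- With the signature, a change inside the body that alters the
    -- inferred type gets reported here, not twenty call sites away.
    wordFrequency :: String -> [(String, Int)]
    wordFrequency = map (\ws -> (head ws, length ws)) . group . sort . words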

------
leoh
> I have seen languages that practically force the programmer to run a
> heavyweight specialized IDE, and that require 14 words to print 2 words.

I picked up Haskell again last week after a long hiatus and although one does
not need a heavyweight IDE, the build system still felt like a bit of a mess.
After adding `parsec` (a popular library for parsing) to a stack project, my
machine spent over an hour compiling dependencies, at which point I gave up.

~~~
thinkpad20
Try using nix instead. Aside from being a fantastic package manager for many
languages it’s a godsend for Haskell. The lengthy compilation times with
Haskell are one of the not so wonderful parts of the language, but they
largely disappear with nix, and the Haskell ecosystem on nix is very well
supported, with a large community. Honestly I’ve been developing Haskell for
years and have never once used stack. I did use cabal before I discovered nix,
but since then I’d hardly dream of using anything else.

~~~
mlevental
how does nix eliminate compile times???

~~~
thinkpad20
It doesn’t eliminate them entirely; it’s just very smart about allowing
packages to be built exactly once, and having a deterministic output such that
it can be determined before a package is built whether a prebuilt version is
available. This prebuilt version can either be the result of a previous build
on your machine, or on another machine served from a repository.

Of course, there are times when you need to compile yourself, but most slow-
to-compile packages, such as Aeson, Lens, Servant or the aforementioned
Parsec, and many more on top of that have prebuilt binaries available when
built from community snapshots (nixpkgs/nixos). You can even pin your package
definitions to guarantee that you’re building something that will have
prebuilt versions available. A fresh project build is usually a minute or two,
sometimes substantially less (anecdotally). Again, not always, but usually.

------
dgellow
> Writing code should be comparable to writing prose. A plain text editor with
> 80-character columns should suffice. Coding should feel like writing an
> email or a novel. If instead it feels like filling out a tax return, then
> the language is poorly designed.

So this page has nice statements like these, but no justification. Why should
writing code be comparable to writing prose or a novel? I don't see why that
would result in a better implementation.

Prose is full of obscure and ill-defined rules that often require subjective
judgement. That's something I want to avoid when writing programs. Program
source code is best written as clearly and unambiguously as possible, limiting
potential misunderstanding, whereas a person writing prose will play with
rhythm, symbolism, rhyme, atmosphere, etc., and can use ambiguity and multiple
meanings to add depth to their work.

Filling out a tax return actually seems better, IMHO: I don't need to develop
my own style over years and years of writing; I learn the (quite strict)
rules and apply them, and anyone who also knows the rules can easily
fix/extend/improve what I've done.

~~~
scrumper
Surely the point there is that you should feel like you can write your idea
down any way you want, instead of being bound by an extremely tightly
constrained form that only lets you put very specific things in certain
places. Your tool is a blank sheet of paper and a pen; it's not in your way at
all.

"This year was a good year: I made $100,000 at my job although I did lose $500
in the stock market."

------
klipt
Given the laziness, you'd think memoizing an arbitrary pure function would be
easy in Haskell - at least as easy as in Python, where you can do it with a
one-line decorator.

But no, it's pretty complicated:
[https://wiki.haskell.org/Memoization](https://wiki.haskell.org/Memoization)
You even have to involve fixed-point combinators. Pretty disappointing.
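
For concreteness, here's roughly the list-based trick that wiki page
describes, sketched for Fibonacci (a general decorator-style `memo` is where
the fixed-point machinery comes in):

    -- The lazy list caches every result; recursive calls index back into it.
    memoFib :: Int -> Integer
    memoFib = (map fib [0 ..] !!)
      where
        fib 0 = 0
        fib 1 = 1
        fib n = memoFib (n - 2) + memoFib (n - 1)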

~~~
mruts
I love FP and have programmed Haskell in professional contexts. But I have to
admit, I don’t like it, and its strong formalisms make so many problems much
harder than they have to be.

Take lenses for example. All of this unnecessarily complicated shit because
Haskell doesn’t have any reasonable record syntax.
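
To make that concrete, a small sketch with hypothetical types: updating one
nested field with plain record syntax forces this kind of inside-out rewrite,
which is exactly the boilerplate lenses abstract away.

    data Address = Address { city :: String, zipCode :: String }
    data Person  = Person  { name :: String, address :: Address }

    -- Updating one nested field means rebuilding every enclosing record:
    moveTo :: String -> Person -> Person
    moveTo c p = p { address = (address p) { city = c } }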

I appreciate Haskell as a research project, and it has clearly pioneered many
important FP concepts: monads, free monads, trampolines, etc. But I would
never ever choose it for a new project. OCaml, F#, and Scala would be my
go-tos. Nice FP, but not so opinionated as to cripple/slow you down when you
want to dip your toes into mutation. And in my experience, almost all
non-trivial programs require at least a little bit of mutation. And when it’s
required, I really don’t want to waste my time with IORefs or STRefs or
whatever.

~~~
quickthrower2
I'd like someone to blog about Haskell who has, say, an average IQ, has
learned Haskell (used it professionally), and can then come back and speak to
points like this.

It seems a lot of Haskellers have super-high IQs and can grok Haskell's Lens
like it's a toy truck. Then they write a blog post that's pretty hard for a
beginner to dissect.

Or maybe they struggled with Lens, but by the time they'd spent 5 years with
other gurus in a professional setting they finally got it, and have forgotten
how hard it is to learn this stuff.

~~~
kark
See the “monad tutorial fallacy”:
[https://byorgey.wordpress.com/2009/01/12/abstraction-intuition-and-the-monad-tutorial-fallacy/](https://byorgey.wordpress.com/2009/01/12/abstraction-intuition-and-the-monad-tutorial-fallacy/)

~~~
quickthrower2
Absolutely - that's why I added the professional experience caveat. Someone
who has just got a concept 5 minutes ago shouldn't write a tutorial about it.

------
cloudhead
Nice, but it misses the point that `IO` is actually pure (referentially
transparent) in Haskell:

    let x = putStrLn "hello"
    in do x ; x

is equivalent to:

    do putStrLn "hello" ; putStrLn "hello"

~~~
rocqua
I thought purity was about having no side effects. Or, in other terms: if you
call a pure function twice with the same arguments, you get the same output.

~~~
TylerE
Those two statements aren't actually equivalent.

Haskell isn't pure. A truly pure programming language would be completely
useless as it wouldn't be able to actually manipulate state at all.

~~~
arianvanp
No! Haskell programs are actually pure!

The main function returns a _description_ of the actions it's going to do; it
doesn't actually execute those actions. You can pass these descriptions around
as values, extend them, match on them, and whatnot.

Think of it like the Command pattern in OOP. When you pass a command around,
the command doesn't actually execute. It's a value that will be executed at
some later stage.
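
A small sketch of that, using nothing but ordinary values:

    -- IO actions are plain values: we can put them in a list, duplicate
    -- them, reverse them. Nothing runs until the RTS executes whatever
    -- `main` evaluates to.
    actions :: [IO ()]
    actions = [putStrLn "hello", putStrLn "world"]

    main :: IO ()
    main = sequence_ (actions ++ reverse actions)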

~~~
millstone
Is there a way to distinguish between a main function that "actually" executes
the actions, and one that does not? I think that laziness makes this a subtle
question.

~~~
arianvanp
There are no functions that actually execute actions. That machinery is in the
Haskell RTS, not in your program.

~~~
Faark
Isn't that very close to being a vacuous semantic distinction at this point? I
don't see much of a difference to "My C# main never actually executes. It's
the .NET runtime that interprets the IL bytecode."

I'm still a beginner, but having to figure out when and where to flush text to
the console doesn't make Haskell feel different from other imperative
languages.

~~~
lmm
> I don't see much of a difference to "My C# main never actually executes.
> It's the .NET runtime that interprets the IL bytecode."

You can't actually reason about C# programs in those terms - e.g. you can't
really say whether two C# programs are equivalent except in the vacuous sense
of being equal as strings or ASTs. Some basic C# functions offer only
operational semantics, so you have to reason about how and when those
functions are "actually executed" if you want to be able to understand
programs that call those functions. E.g. you can't know whether "(foo(),
foo())" is equivalent to "x = foo(); (x, x)" without thinking about "how foo()
is actually executed". In contrast you can reason about Haskell programs that
contain IO actions without having to understand how those IO actions are
executed.
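
A sketch of that last point in Haskell, with a made-up `foo`:

    -- Because IO values are descriptions, this substitution always holds:
    foo :: IO Int
    foo = do putStrLn "side effect!"; pure 42

    pair1, pair2 :: (IO Int, IO Int)
    pair1 = (foo, foo)
    pair2 = let x = foo in (x, x)  -- equivalent: nothing has run in either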

------
6thaccount2
I thought it was neat that they talk about trying to parse the "J" programming
language.

~~~
tluyben2
Yes, direct link[0]. Though most of the subjects are interesting in my
opinion.

[0] [http://www-cs-students.stanford.edu/~blynn/haskell/jfh.html](http://www-cs-students.stanford.edu/~blynn/haskell/jfh.html)

------
revskill
Due to the purity, programming in Haskell is like learning how to configure a
system instead of coding.

Configuration is always pure; only the underlying runtime system is not.

------
Barrin92
The post focuses on the practicality of Haskell and addresses purity, but
leaves the biggest problem out: laziness, which is more or less the reason
Haskell exists, is the wrong default. In particular lazy I/O, which can
introduce horrible bugs or straight-up mess with the execution order of
critical code, which in my opinion is an absolute no-go in an industrial
language.

~~~
nothrabannosir
_> lazy I/O which can introduce horrible bugs or straight up mess with
execution order of critical code_

I've never seriously programmed Haskell, so honest question: I understand the
first point, but how is the second possible? Isn't the point of passing
RealWorld around that you enforce the order of execution through data
dependencies rather than expression order? It always seemed like a very
elegant (and incredibly impractical :) ) solution to me.

~~~
tsuraan
OP is somewhat conflating two different things: non-strict function evaluation
and lazy IO. With lazy IO, you can get, for example, a String from a file.
That string is actually a lazily-constructed chain of Cons cells, so if you're
into following linked lists and processing files one char at a time, then it's
fun to use. The dangerous bit comes in when you close the file after
evaluating its contents as a string:

    -- (hypothetical lazy-file API, to keep the sketch short)
    fd <- open "/some/path"
    s  <- readContentsLazy fd   -- s is an unevaluated lazy String
    close fd                    -- closed before s is ever forced
    pure $ processString s

Now, processString is getting a string with the file's contents, right? Nope:
you have a cons cell that probably contains the first character of the file,
and maybe even a few more up to the first page that got read from disk. But
eventually, as you're processing that string, you'll hit a point where your
pure string processing actually tries to do IO on a file that isn't open
anymore, and your perfectly sane and pure string-processing code will throw an
exception. So, that's gross.

That's a real issue that will hit beginners. There's been a lot of work done
to make ergonomic and performant libraries that handle this without issues; I
think that right now pipes[0] and conduit[1] are the big ones, but it's a
space that people like to play with.

[0] - [https://hackage.haskell.org/package/pipes](https://hackage.haskell.org/package/pipes)

[1] - [https://github.com/snoyberg/conduit](https://github.com/snoyberg/conduit)
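
For reference, the usual low-tech workaround before reaching for those
libraries is to force the contents while the handle is still open; a sketch
with plain System.IO:

    import System.IO

    -- Force the whole file before the handle closes, so the pure consumer
    -- never triggers IO on a dead handle.
    withFileContents :: FilePath -> (String -> a) -> IO a
    withFileContents path f =
      withFile path ReadMode $ \h -> do
        s <- hGetContents h
        length s `seq` pure (f s)  -- forcing the length reads it all now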

~~~
garmaine
Seems like the problem is that file close is strict, whereas the file handle
should be a locked resource that is auto-closed when the last reference is
destroyed (and “fclose” just releases the habdle’s own lock).

In other words the problem seems to be (in this example) that the standard
library mixes lazy and strict semantics. A better library wouldn’t carry that
flaw.

~~~
dllthomas
So that's actually how it works if you just ignore hClose. The problem is that
it sometimes matters when things get closed, so they do "need" to expose the
ability to close things sooner.

~~~
garmaine
Sort of. It eventually gets cleaned up by the garbage collector, yes. But that
could be after a nondeterministic amount of time if the GC is mark-and-sweep.
My point is that in this circumstance reference counting could be used
regardless, so that as soon as the last thunk is read, the file is closed. The
'hClose' is basically making a promise to close the file as soon as it is safe
to do so.

~~~
dllthomas
> as soon as the last thunk is read, the file is closed.

That's probably doable. It's true that when the only reference to the handle
in question is the one buried in the thunk pointed at by the lazy input, it
should be safe to close it when a thunk evaluates to end-of-input (or an
error, for that matter).

I'm not sure whether or not it'd be applicable enough to be worth doing. The
immediate issues I spot are that a lot of input streams aren't consumed all
the way to the end, and that you'd have to be careful not to capture a
reference anywhere else (or you'll be waiting for GC to remove that reference
before the count falls to zero).

~~~
garmaine
Also things like unix pipes or network sockets, where the "close" operation
means something different as there are multiple parties involved. Arguably the
same is true of files as you could be reading a file being simultaneously
written to by others.

~~~
dllthomas
Right. It's easy to handle the simple case, but honestly "let the GC close it"
works fine in the simplest cases.

------
bitmadness
Used to code quite a bit in Haskell, but I've moved on. I believe in the
philosophy "Simple things should be easy, hard things should be possible";
Haskell forgets the first part.

------
29athrowaway
Languages influence each other. Few people have used OCaml but most will get
to use OCaml-influenced features in a mainstream language like Swift, Rust,
Kotlin, Scala, etc.

------
quickthrower2
I read the article, but I am confused. Where is the "Fan Site"? Is this a
tutorial for creating a fan website?

~~~
projektfu
I believe it’s all the links at the bottom represent his fan site.

------
hardwaresofton
Here's a chunk of code from a Haskell project I work on that shows how concise
and nice Haskell can be:

    class EntityStore b m => TagStore b m where
        -- | Add a new tag
        addTag :: b -> Tag -> m (Either DBError (ModelWithID Tag))

        -- | Get all tags
        getAllTags :: b -> Maybe Limit -> Maybe Offset -> m (Either DBError (PaginatedList (ModelWithID Tag)))

        -- | Find tags by the given IDs
        findTagsByIDs :: b -> [TagID] -> m (Either DBError (PaginatedList (ModelWithID Tag)))

That bit at the start (`EntityStore b m => TagStore b m`) roughly reads:
"given types 'b' and 'm' that satisfy the typeclass 'EntityStore', they
satisfy the typeclass 'TagStore' if they implement the following methods".

If your eyes glazed over reading the sentence above -- this is a way of
composing interfaces. Anything TagStore-capable is required to also be
EntityStore-capable.

And here's that being used:

    createTag :: ( HasDBBackend m db
                 , HasCacheBackend m c
                 , MonadError ServantErr m
                 , TagStore db m
                 ) => SessionInfo -> Tag -> m (EnvelopedResponse (ModelWithID Tag))

    createTag s t = requireRole Administrator s
                    >> validateEntity t
                    >> getDBBackend
                    >>= \db -> getCacheBackend
                    >>= \cache -> addTag db t
                    >>= ifLeftEnvelopeAndThrow Err.failedToCreateEntity
                    -- invalidate cached tag listing & FTS results
                    >>= \res -> invalidateTagListing cache
                    >> invalidateTagFTSResults cache
                    >> pure (EnvelopedResponse "success" "Successfully created new tag" res)
    

I prefer the explicit 'bind' syntax (>>/>>= are pronounced 'bind') to do
notation, because of the clarity you get when the code is laid out, and
because it encourages modular functions.
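
For comparison, a sketch of the same function in do notation (same signature
and hypothetical helpers as above); GHC desugars this to roughly the chain of
binds shown earlier:

    createTag s t = do
      requireRole Administrator s
      validateEntity t
      db    <- getDBBackend
      cache <- getCacheBackend
      res   <- addTag db t >>= ifLeftEnvelopeAndThrow Err.failedToCreateEntity
      -- invalidate cached tag listing & FTS results
      invalidateTagListing cache
      invalidateTagFTSResults cache
      pure (EnvelopedResponse "success" "Successfully created new tag" res)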

All those `HasDBBackend m db` and `TagStore db m` incantations mean that all
this function knows about the world is that it has a DB backend, a cache
backend, and a tag store, and that it knows something about the kind of errors
it can throw (`ServantErr`). Then comes what the function actually does (after
the `=>`): take in a SessionInfo and a Tag, and produce an "action" that, when
run, will produce an `EnvelopedResponse (ModelWithID Tag)` value.

This style is called mtl, and it's just _one_ of the big patterns in the
Haskell community, but the big feature here is the isolation -- this function
can only do things it knows about by way of constraints (`TagStore db m`
means a `TagStore`-capable thing is accessible to you) -- this is much safer
than having functions that can just do anything at any time.

I've said it numerous other times, but the kind of stuff you do in Haskell
trickles into other languages -- see Florian Gilcher's talk from RustLatam
2019[0] -- it's basically this same concept but in Rust (one of the reasons I
absolutely love Rust: they've bolted on a fantastic type system).

[0]:
[https://www.youtube.com/watch?v=jSpio0x7024](https://www.youtube.com/watch?v=jSpio0x7024)

------
wellpast
Haskell is fun, lots of fun, but it is not a practical language, by virtue of
being strongly typed.

By practical I mean that strongly typed PLs are not forgiving as your world,
and your perceptual view of the world, change over time -- as they do for any
business.

Your program must pass a proof to execute. For the most part you do not need
to explicitly express your types; Haskell can infer them. Haskell enthusiasts
often point to this as some kind of nicety. But it's not, because (1) the
types are still there; your program still has to cohere wrt the type system;
and (2) strongly typed programs with explicit type annotations are much easier
to read and manipulate, so you should use them anyway.

As your world and assumptions change over time, you will find yourself doing
a ton of work to make everything cohere for the type proof again. Haskell
enthusiasts also like to say things like with strong typing "Refactoring is
easier". This is incorrect. What they are pointing at is that Haskell will of
course catch certain (type-mismatch) bugs during a refactor, but it will also
flag many, many things that would not be bugs were you using a lighter or no
static type system. So, ironically, in my experience this _discourages_
refactoring, because you become exhausted by all the unnecessary labor
required to make rather nominal changes to your domain model.

Dynamic languages, on the other hand, in the right hands, don't impose this
whole unnecessary labor.

I feel that there's an elephant in the room whenever I talk to someone who is
enthusiastic about strong typing in a business or real-world context. And it's
this: their enthusiasm has more to do with the pure fun of manipulating a type
checker/prover and playing in these weeds than with actually getting real
stuff done over time.

Another common retort from the strong-typing enthusiast is to point back at a
dynamically or loosely typed system they've worked on, point out what a mess
it is, and then proceed to show how they no longer run into certain classes of
bugs, like null pointer exceptions. What goes unanalyzed is the competence of
the team that made the mess: that it was the fault of the skills at play, and
not a necessary consequence of not having types. What also goes unanalyzed is
whether the extreme cost of playing the strong-typing game is worth an
occasional NPE (an immensely easy and fixable kind of bug) here or there -- in
a standard business (i.e., not mission-critical) context.

I realize this is going to be hotly contested and that I'm stepping on toes
here, but I think all of this is true.

Learn Haskell by all means, but be honest with yourself when comparing it to
the successes of more pragmatically designed languages. This should also be no
surprise; Haskell is a research language born in academia, NOT born out of
long practitioner experience. Compare Elm -- another strongly typed language,
for the frontend -- which was born out of a doctorate, with limited real-world
experience. Then compare, say, Clojure (another esoteric language), which was
born from extensive pragmatic experience, and look at the choices it made and
how in many cases they run against Haskell's.

~~~
xedrac
> with strong typing "Refactoring is easier". This is incorrect.

I can't tell if you're just trolling, but have you ever done a large refactor
in a dynamically typed language such as Python? It's a runtime minefield,
taking a huge amount of testing effort to gain any level of confidence that
you got it correct. In a language like Haskell or Rust, it's usually as simple
as getting the thing to compile.

~~~
lacampbell
IME a code base with high test coverage lets me refactor far more securely
than an expressive static type system. Static typing is useful, but if I had
to pick one it would be tests all the way.

~~~
verttii
The cost of high test coverage could be an additional 3x lines of code for
the tests. While you still need tests with an expressive type system you can
basically cut the code coverage to a fraction, and maintain the same level of
confidence in refactoring.

~~~
lacampbell
_While you still need tests with an expressive type system you can basically
cut the code coverage to a fraction._

Where on earth do you get the idea that static typing lets you cut the code
coverage to a _fraction_? I am going to need specific examples to believe this
claim.

~~~
verttii
Using a comprehensive and well-crafted type system limits the inputs you can
give to your functions. Drastically. If you use the type system right, it
essentially forbids you from supplying faulty data as input to your functions.

Once a large range of (faulty) inputs is already forbidden by the type system,
you're left to write tests that actually test the business logic. I don't see
how this would not cut the required test coverage compared to, for example,
dynamic type systems, where virtually any input is allowed and has to be
covered by tests.
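
A tiny sketch of what "forbidden by the type system" means in practice, with a
made-up `Email` type: once the smart constructor is the only way to build one,
downstream functions never need a "what if it's malformed?" test.

    newtype Email = Email String

    -- In a real module you'd hide the Email constructor in the export
    -- list, making mkEmail the only way in.
    mkEmail :: String -> Maybe Email
    mkEmail s
      | '@' `elem` s = Just (Email s)
      | otherwise    = Nothing

    -- Downstream code can take the Email at face value:
    sendWelcome :: Email -> IO ()
    sendWelcome (Email addr) = putStrLn ("Sending welcome mail to " ++ addr)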

~~~
wellpast
This is all false.

You can get input-data validation dynamically or statically. You don't have
to use static type verification to validate data inputs, and dynamic
validations are much more expressive and simpler. See the immensely expressive
Clojure spec or Ruby Validations and compare them to any type system.

~~~
verttii
Couldn't find anything on Ruby validations (drop me a link?), just something
about the db ORM/engine.

However, Clojure's spec does indeed seem to be runtime checks. It seems very
different from a type system; really it just packages functionality for the
primitives rather than providing an expressive type system.

I may be understanding this wrong, but I fail to see the added benefit of
moving checks to runtime, in contrast with the performance penalty it
produces. Or maybe it's just an improvement to an otherwise dynamic language,
like Erlang/Elixir has Dialyzer.

It seems to me it's more of a mechanism to handle errors, or to produce code
that handles inputs a static type system would prevent from being used in the
first place.

~~~
wellpast
Runtime checks are much, much more flexible (because they are ad hoc), more
composable (they are written in the PL itself), _and_ more capable (you have
the full breadth of the PL's semantics).

I could go on with concrete examples to demonstrate each of these merits but
here is a big one:

In a statically typed architecture you typically see a typed domain object (an
ADT) -- e.g., Person -- and you let the type itself "validate" (e.g.,
Person.name and Person.age are non-null/required fields, etc.). And _wherever_
Person flows, you are obligated to adhere to this form and its validation
requirements, OR you must create a new type and map between the two. This is
already obviously a bad idea.

In a dynamic language you can define specs universally (like you are obligated
to in the static-typing case), or you can define a spec per scenario and per
function.

Let's say I have a process (a sequence of function calls) that ultimately
cleanses/normalizes a person's name (I'm making up a use case here). This code
_does not need age_, but in a statically typed system you will be passing
around a Person object, and all sources of data must "fill in" that age slot.
There are a lot of responses from the statically typed camp to this scenario
[1], but if you follow them all down honestly, you will end up with a dynamic
map to represent your entity, and you will be best off within a dynamic system
like Clojure to optimally write and compose your code.

[1] The only other response is to say "who cares if I have to 'fill in' age",
and to this person I say: I feel sorry for the people six months or a year out
who have to build on top of your code.

Here's the Ruby link you asked for -- it's actually Rails, but it can be fully
decoupled from ActiveRecord, FWIW:
[https://guides.rubyonrails.org/active_record_validations.html](https://guides.rubyonrails.org/active_record_validations.html)
(Spec is far more decoupled and designed for general use.)

~~~
bam365
Not an expert on spec, but I suspect there is a lot of data that isn't
representable with combinations of predicates: GADTs, rank-n types,
existential types, type families, data families, and other more advanced
type-system features.

Either way, the tradeoff is this: statically typed languages are guaranteed to
be free of type errors, but no such guarantee can be made with runtime
assertions. Whether or not you use a tool like spec to help you make those
assertions, the fact that they happen at runtime prevents them from making any
guarantees about type safety. Even if you use spec literally everywhere, as
you suggest (and no one actually comes close to doing that anyway), you are
still not guaranteed type safety. Dijkstra explains why:
[https://www.cs.utexas.edu/users/EWD/transcriptions/EWD03xx/EWD303.html](https://www.cs.utexas.edu/users/EWD/transcriptions/EWD03xx/EWD303.html)

