A Pythonista's Review of Haskell (yingw787.com)
112 points by yingw787 on Jan 31, 2020 | 102 comments



Making it to the end of an almost 2000-page Haskell book in only 3 months is quite the feat.

The types-vs-terms distinction is quite confusing to many, myself included, so I would've loved to hear more about it, and overall about what's possible with type-level programming. That's something fairly unique among programming languages, after all.

I'd certainly be interested in reading more of the author's thoughts if he continues down the path of using Haskell to solve problems.


I would highly suggest https://thinkingwithtypes.com/ - it's not a 2000 page monster, and specifically addresses what you are interested in.


I've been reading that book on and off for months, but honestly, many parts are still too advanced for me. I've backed off from type-level programming for now to stay productive building actual things. Still, I'd like to read about it from someone coming from imperative languages.


Thank you! I’m on sabbatical so I was reading full time. I believe John (the guy who wrote the set of notes I credited) started writing them two years ago, and did two passes over the book. I also skipped a bunch of exercises nearer the end (I need to review parser combinators). Don’t feel bad if you take your time!

Python is still my strongest language, and I’m planning on shipping product for a bit after this, so I will be using that for the next few months. I’ll do my best to circle back to Haskell, I love the community and see its potential in what I want to do. Subscribe to my email list if you want to stay updated!


> IMHO, the solution to high cognitive load, given a high enough reward, is just persistent, methodical execution towards learning.

It will never converge towards being easy to handle. The load doesn't come from being inexperienced, it comes from keeping the big picture in your head (all your types and their interactions) instead of iterative thinking.


this isn't true - at least not in my years of professional experience

I barely keep anything in my head most of the time when programming in Haskell. It's kind of related to the "emphasis on structure" point from the article. All of a type's "interactions" are just functions. There aren't any second-order things to consider, due to referential transparency.


I agree, type signatures & inference and referential transparency offload the cognitive load from your head to the compiler/IDE.

Functions that operate on data structures are generally guaranteed to be able to work on said data. This way you don't need to constantly be mindful of what the data looks like; the compiler does that for you.


This exactly. Although the effect is much greater if you turn on some compiler warnings (like -fwarn-incomplete-patterns). The emphasis on coming up with types that contain the guarantees you need means that once it compiles, you don't have to remember which guarantees you need. Although doing that is not nearly as convenient in Haskell as it could be (types often don't compose well, and you need to add strictness annotations here and there to prevent `undefined`), it is actually possible, which is a big step up from languages like C or Python where you can't even try.
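For example (a rough sketch; `describe` is a made-up function):

    {-# OPTIONS_GHC -fwarn-incomplete-patterns #-}

    -- GHC flags the forgotten Nothing case at compile time:
    -- "Pattern match(es) are non-exhaustive"
    describe :: Maybe Int -> String
    describe (Just n) = "got " ++ show n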


In Python,

    type(type) is type

evaluates to True. That is not permitted in the mathematical type theory that the Curry-Howard correspondence applies to.



And the only reason the language pragma is deprecated is because TypeInType is now just how things are, by default, with no opt-out.


On lobste.rs somebody pointed out that undefined satisfies the Curry-Howard correspondence for Haskell, and I was wondering whether the proper analogue for Python would be NoneType in that case.

Comment here: https://lobste.rs/s/ovebeq/pythonista_s_review_haskell#c_gz0...


What Python and other dynamically typed languages call "types" aren't types at all in the theoretical sense. In type theory, types are properties ascribed to programs that describe their behavior and are part of a language's static semantics (AKA checked at compile time). Dynamically typed languages are really untyped (or from a certain perspective, "unityped," meaning that every expression has the same type: https://existentialtype.wordpress.com/2011/03/19/dynamic-lan...).


This is a very interesting perspective! I’ve never thought about dynamic languages as unityped static languages. Thank you for sharing!


Note that many languages have both static and dynamic types: Objective-C, Java, TypeScript, Dart, etc. The dynamic types bring new capabilities above and beyond what is possible with static types.

(That is, the dynamic-or-static choice is a product type, not a sum type!)


> The dynamic types bring new capabilities above and beyond what is possible with static types.

Not really. "Dynamic type" just means an object that carries around a description of its structure (or more commonly, a pointer to a description). You can do the same thing in many languages with static types. (Including Haskell. GHC has special support for this pattern.)


What special support does GHC have?


GHC can derive a Typeable type class instance for any data type which allows values of that type to be stored in a Dynamic wrapper. For soundness reasons the Typeable type class is special in that it can only be derived; you can't implement it manually. The runtime types of Dynamic values can be queried and compared and you can attempt to extract a specific known (concrete) type, which will succeed only if the runtime type matches. There are some limitations; for example, you need to know the exact type of the value you're extracting, not just a type class constraint, so you can't e.g. add two Dynamic values directly even if they hold the same type of value at runtime with a valid Num instance.
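For illustration, a minimal sketch of the Dynamic pattern (my own example, not from GHC's docs):

    import Data.Dynamic

    -- Values of different types behind the same Dynamic wrapper:
    xs :: [Dynamic]
    xs = [toDyn (1 :: Int), toDyn "hello", toDyn True]

    main :: IO ()
    main = do
      -- Extraction succeeds only if the runtime type matches:
      print (fromDynamic (head xs) :: Maybe Int)   -- Just 1
      print (fromDynamic (head xs) :: Maybe Bool)  -- Nothing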

Alternatively, you can just define a recursive sum type expressing all the possible runtime types of the dynamic expression. In Aeson for example there is a type which can encode any value which might be found in a JSON file, which is basically everything you can express in Javascript apart from function closures. This doesn't require any special support from GHC and works well in most statically-typed languages.
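A hand-rolled version of that sum type might look like this (illustrative names, not Aeson's actual constructors):

    data JSONish
      = JNull
      | JBool Bool
      | JNumber Double
      | JString String
      | JArray [JSONish]
      | JObject [(String, JSONish)]
      deriving Show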


I've heard this called "unit typed" (see the Wikipedia page of that name); I think "unitype" may be incorrect.

i.e. Haskell includes the unit type ()


Haskell's unit type is what other languages call "void", the type with exactly one value (not counting 'undefined', Haskell's dirty little not-secret). That's different from Unitype/Any, the type that contains all values.

Bob Harper uses "unitype", which is a pretty strong argument for the name's validity.


> not counting 'undefined', Haskell's dirty little not-secret

There is no "undefined" value in pure Haskell. There are expressions that have no value, due to a runtime exception (error, undefined, division by zero, memory allocation failure) or non-termination (unbounded recursion). Either way, evaluating the expression does not produce a value for the remainder of the program to operate on—you can't write a (pure) program like `if isUndefined expr then x else y` since—assuming isUndefined is a pure function and doesn't ignore its argument—the whole program is undefined if expr is undefined.
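A small sketch of that behaviour; nothing blows up until the undefined element is actually forced:

    xs :: [Int]
    xs = [1, undefined, 3]

    main :: IO ()
    main = do
      print (length xs)   -- 3: length doesn't force the elements
      print (head xs)     -- 1
      print (xs !! 1)     -- *** Exception: Prelude.undefined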


>Haskell has no null definition of type Char, at least according to this Stack Overflow answer. Python doesn't distinguish between Char and String types with single and double quotes, so empty char is empty string is an empty list. It seems weird to me that for all the typing wealth Haskell provides, this base case doesn't exist, though I personally don't know what it is.

An empty string is equivalent to an empty list, [].

If you want a null, use the Maybe monad.


As anyone who has used C will recall, a char is just a small integer. And just the same in Haskell, a char is exactly one character. Once you understand that, this confusion makes about as much sense as expecting an integer to be "empty".

I suspect this kind of question would be better asked on the Haskell IRC channel or similar, rather than trawling Stack Overflow for insights. If someone has the relevant missing insight they can quickly point it out and save a lot of confusion.


It does make sense: 1/0 = undefined, aka null. The thing with Haskell is that the concept of null is typed as part of a sum type, meaning it will be handled with no runtime errors.


The author was comparing

    "a"
    'a'
    ''
in Python and Haskell. In Haskell, the first is a list of Char (which happens to be exactly the same thing as a String), the second is a single Char, and the third doesn't exist: you can't have an empty Char like you can an empty String. A String is a list and has a length, which can be zero; a Char is a character, not a list, and simply can't be "empty" in this way. In Python, all of these are strings, and single and double quotes are just alternative syntax for strings.
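A quick GHCi session (output abbreviated) makes the Haskell side concrete:

    ghci> :type "a"
    "a" :: [Char]     -- String is a synonym for [Char]
    ghci> :type 'a'
    'a' :: Char
    ghci> length ""
    0                 -- a String can be empty...
    ghci> ''
    <interactive>: error: ...   -- ...but a Char literal cannot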

In C you also have a char type and you also have single-quoted char literals, and the third line is a compile-time error in C too for exactly the same reason, because a char literal represents a char, not a list of anything, and so it can't be empty in the way that a string can be empty.

From the post:

> Python doesn't distinguish between Char and String types with single and double quotes, so empty char is empty string is an empty list.

This is basically right, but the subtle misunderstanding is that there is no such thing as an empty char, or indeed a char of any kind, in Python; the closest you have is ord() and chr() to convert between strings and integers. Indexing a string just gives you another string of length one; there's no char type in Python. If you wanted to implement cons, car, and cdr in Python such that (cons (car "abc") (cdr "abc")) = "abc", you wouldn't be able to do it in the natural way, because the type that (car "abc") needs to have doesn't exist in the language. Strings in Python are almost a list type, but the elements of that list aren't representable.

So, coming from Python, the misunderstanding about why there's no "base case" for char as there is for string is understandable, but it doesn't really have much to do with Haskell or with any kind of runtime error handling.


I already 100% understand what you're saying. No need to clarify. You're misunderstanding me.

>Once you understand that, this confusion makes about as much sense as expecting an integer to be "empty".

My comment was in reference to the above. An empty number does make sense because of cases like 1/0. There is nothing wrong with including a null in a set if some operations are illegal: such operations can't return anything, so you can use a null to represent the absence of something. This is entirely different from an empty container type.

>So, coming from Python, the misunderstanding about why there's no "base case" for char as there is for string is understandable

The author is looking for a null value not an empty container type or "base case". I quote: "Haskell has no null definition of type Char,"

Basically he's looking for the equivalent of "None" in haskell. He has slight confusion in thinking that an empty container type or empty string is equivalent to a "None."

Because he literally said he's looking for a null, he is indeed talking about all the things associated with null.

The closest thing to null in Haskell is the Maybe monad, which is like null but without the runtime exceptions.

His confusion arises because he feels that if in Haskell you can do this:

  stringfunction (x:xs) = ...
  stringfunction [] = ...
Then you should be able to do this:

  charfunction x    = ...
  charfunction null = ...
which is impossible in Haskell, so:

I'm saying that similar functionality can be achieved with:

  charfunction (Just x) = ...
  charfunction Nothing  = ...
The above is 100% handling a null, one of the main sources of runtime errors.


We're just pointing out different things.

I'm pointing out the different semantics of single-quoted literals as the source of the confusion, with "empty char" being directly mentioned in the post as something that exists in Python, which, as of course you know, it doesn't.

If the confusion was about how to add a nothing value to a type, why did it arise specifically in regards to Char, and not, say, Integer? The question "why are the char types different between Haskell and Python in this way" is best answered by pointing out that Python doesn't have a char type at all, and all of these things are strings.

Of course, you're entirely correct that you can use a sum type to handle nulls.


>If the confusion was about how to add a nothing value to a type, why did it arise specifically in regards to Char, and not, say, Integer?

As I illustrated earlier: because of the similarity between char and str, and because recursive functions on str have the base case [], he believed that char needed one too.


> If you wanted to implement cons, car, and cdr in Python such that (cons (car "abc") (cdr "abc")) = "abc" you wouldn't be able to do it in the natural way

Sure you can!

    >>> car = lambda x: x[0]
    >>> cdr = lambda x: x[1:]
    >>> cons = lambda x, y: x + y
    >>> cons (car("abc"), cdr("abc"))
    'abc'


But that cons is broken, such that you can't construct lists of strings.

For a correct implementation of cons, the following gives a list '("a" "b") of two strings:

    (cons "a" (cons "b" ()))
and (list "a" "b") is just sugar for the above. This doesn't work if you redefine cons on strings to mean concat.

In other words, if (cons "a" "bc") gives you "abc", then it's not cons.


    *** Exception: Prelude.head: empty list


Yeah, I don't know why Prelude.head doesn't return a Maybe. Anyway, the point stands.

• In Java forgetting to check for null will crash your program.

• In C, forgetting to check for NULL will lead to undefined behavior.

• In Haskell, you either have to (a) check for Nothing, or (b) explicitly say "crash the program if this is Nothing" by using a function like fromJust or head, which internally does (a). This is always an improvement over C, and often an improvement over Java.
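In code, (a) and (b) might look like this (a sketch; `safeHead` and `firstOrCrash` are made-up names):

    import Data.Maybe (fromJust)

    -- (a) handle the empty case explicitly:
    safeHead :: [a] -> Maybe a
    safeHead []    = Nothing
    safeHead (x:_) = Just x

    -- (b) explicitly opt in to crashing on Nothing:
    firstOrCrash :: [a] -> a
    firstOrCrash = fromJust . safeHead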

Side note: I like Rust's convention of consistently naming these might-crash-your-program methods "unwrap" and "expect".


Yeah, I think Rust has better defaults here.


I get the choice. It's unintuitive for basic types to be wrapped in a Maybe monad. People expect addition, subtraction, and division to return numbers; having division be the only operation that returns an optional is a bit off.

I get why it wasn't done in Haskell, even though I disagree with the choice.


You cannot have a null exception in Haskell.

Yes, I'm aware that 1/0 returns an exception.

I'm saying that it makes sense to use nulls because 1/0 is an undefined operation. Therefore representing the output of 1/0 with a null makes sense, so the existence of nulls makes sense. That is not Haskell code I'm typing up there: "1/0 = undefined" is not code.

If you read my reply you will see that my statement isn't referring to an exception caused by 1/0. It's actually referring to a null exception, which is completely different.

I am explicitly saying that the "null" in Haskell is part of a sum type (the Maybe monad), and in that case the "null" or Nothing will be handled with no runtime errors. You cannot have a null exception in Haskell.

Never did I say that Haskell has no runtime errors, and 1/0 is not an exception caused by a null. Please reread and analyze my statement.


Isn't that just due to `head` being a non-total function?

Nothing to do with null per se.


Well, Haskell calls the empty list the "null" list (see List.null), so I could say that List.head crashes when it encounters a null. :P

Anyway, the real lesson is that even if your language has sane defaults that force you to check that your list isn't empty, or that your pointers aren't null, a bad library function like List.head can hide that check away from you and crash anyway, bringing you back to square one in terms of the type system preventing unexpected crashes.


The null works with cons. So it's not a proper null. A proper null should throw an exception if you try to do any operation on it.


FYI OP: I posted your review on r/haskell where it got several comments: https://www.reddit.com/r/haskell/comments/ewjnf4/a_pythonist...


Thank you very much for sharing it there! The more the merrier :)


Quote: "I don't think you can do anything like this with Python or any other practitioner's language that I'm familiar with"

So, then you're not familiar with C/C++, Java, C#, Pascal/Delphi, Ada - to say the least.


> So, then you're not familiar with C/C++, Java, C#, Pascal/Delphi, Ada - to say the least.

Well, yes, strictly speaking you can do just about anything in just about any Turing complete language if you work hard enough. However, expressing patterns like that with the powerful, general semantics and the elegant syntax of the Haskell version is not something that is offered by any of the other languages you mentioned.

It was perhaps an unfortunate choice of example, because many languages do have first class support for some sort of array/list type and some sort of optional/nullable type and so might offer a reasonably tidy implementation of this specific case. However, in Haskell, the exact same technique would apply just as much to any choices of Functors. Indeed, you can often write generic algorithms that work with any choices of Functors (or Applicatives, Foldables, Monoids, etc, depending on what minimal functionality is required for your algorithm to make sense). Likewise, there are very many generic algorithms already available in libraries that can be used with both the standard instances of those typeclasses and new ones you write yourself. This is the big win for Haskell's type and typeclass system, or what the original author refers to as its emphasis on structure.


Not sure about the last two you listed, but those languages don't support parametric polymorphism which means that although you can write a `fmap` method for things, the compiler can't check its type properly. At best you can write an interface, but that interface won't be able to guarantee that the functor returned by fmap is the same one you started with.


Java and C# at least should support (parametric) polymorphism via generics. In Haskell you don't need GHC Generics to achieve parametric polymorphism, though; that mechanism is used for different things.


I think they mean type classes/constructors [1] since they're talking about fmap, though I'm not sure what the proper name is for that concept. What name would you use?

They're trying to say that Java doesn't support writing a single function like so:

    <T<A> extends Iterable<A>, A, B> T<B> fmap(T<A> iterable, Function<A,B> mapper)
I.e. you can't pass ArrayList<A> to a method which takes List<A> and maps to List<B> and know for certain that you get an ArrayList<B> out.
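For contrast, in Haskell the same `f` appears on both sides of fmap's signature (`fmap :: Functor f => (a -> b) -> f a -> f b`), so the compiler knows the concrete functor is preserved. A tiny sketch (`halve` is a made-up name):

    halve :: Functor f => f Int -> f Int
    halve = fmap (`div` 2)

    -- halve (Just 4)  == Just 2
    -- halve [4, 6, 8] == [2, 3, 4]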

[1] https://en.wikipedia.org/wiki/Type_class#Higher-kinded_polym...


Well data constructors are a part of algebraic data types which don't really exist in OOP. Unless you consider the class itself (or an interface) or its constructor as the algebraic data type constructor, and its properties/parameters as the values.

I'm not so sure fmap would make semantically as much sense in something like Java as it does in FP.


Yes, of course you can encode it in any language you want, but the point is that ML-ish languages make it extremely natural to think like this, and Java makes it impossibly painful. Haskell makes traditional imperative algorithms painful, though, which is arguably a greater loss.


> ML-ish languages make it extremely natural to think like this

You say that, but actually fmap doesn't exist in OCaml (or I assume any other common MLs). Of course you can embed a language and type system within it that does have it (https://blog.janestreet.com/generic-mapping-and-folding-in-o...) but that's different.


Using List.map and Option.map is more verbose and less general, but it's still most of the way there.

OCaml still has currying, algebraic data types, "implicit" returns, tail call optimization, etc., and it encourages recursion instead of loops wherever applicable.


Of all the languages I've seen, Swift seems to be the best compromise between ML style FP and imperative programming.


Did you try F#? How would you rank it?


Unfortunately, I haven’t. The only ML family languages that I have significant experience with are SML and OCaml.


What's the equivalent of

    -- zs' = [Just 2, Nothing, Just 4]
    zs' = (fmap . fmap) (+1) [Just 1, Nothing, Just 3]
in C?


At least on the D side:

    Nullable!int[] zs = [2.nullable, Nullable!int(), 4.nullable];
    // no partial application, and
    // separate template parameters and runtime parameters
    // so (fmap . fmap) (+1) is one function
    Nullable!int[] zs2 = map!(apply!(i => i + 1))(zs).array;
The C version probably has a lot more function pointers, and may use nonstandard compiler features like closures. But in principle you could do the same thing.


That's not the same, it's just composing the maps for arrays and nullables. The whole point of fmap is that you can write

    (fmap . fmap) (+1) [Just 1, Nothing, Just 3]
but also

    (fmap . fmap) (+1) (Just [1, 3])
rather than using

    (list_map . option_map) (+1)
and

    (option_map . list_map) (+1)
respectively.

The usefulness of this is arguable. In my biased opinion as someone who favours OCaml (which lacks the typeclasses that allow this) over Haskell, I think it's minimal. While it's definitely useful to recognise that lists, options, and promises are examples of the same pattern, I don't think this generalisation has many benefits in code (at least in MLs, where you don't need this pattern just to print things).


> The usefulness of this is arguable.

And I have argued it on D, and we very deliberately did not go with the range semantics. Optional types are not ranges unless you're compulsively hunting for forced commonality. Might as well treat any value as a range of 1.


They are not ranges, but they are functors, as are ranges. The commonality between ranges and option types is that they are both functors and this is not forced.


I guess I subscribe to the school of thought that says that types are supposed to model the domain information of the data they represent. I know what it means when a value is either provided or not provided. I know what it means when a field has one of a number of values. I don't know what it's supposed to represent, in a domain sense, when a value is "a functor."


Note that a value is not a functor, its type is. When some type T is a functor, this means that T contains (as part of its definition) a type variable which you can change by mapping functions over values of type T.

For instance, the list type `[a]` and the option type `Maybe a` are both functors. The associated type variable is `a`. This type variable can be manipulated by using `map` to apply some function `f : a -> b` over values of both `[a]` and `Maybe a`. Applying `f` using `map` over such values would yield values of new types, namely `[b]` and `Maybe b`.

This implies that `map f` is a general way of expressing the application of function `f` to values of any functor type. The notion of functor types is therefore supposed to represent any kind of abstract containment relationship. In other words, any time you have the situation of an arbitrary type being wrapped in some kind of "wrapper" (list of..., set of..., range of..., optionally present..., etc), you are dealing with a functor type.

The functor abstraction allows you to apply ordinary functions to such values in a uniform way, regardless of which particular kind of container or wrapper we are dealing with.
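As a concrete sketch (`Box` is a made-up wrapper type):

    data Box a = Box a deriving Show

    instance Functor Box where
      fmap f (Box x) = Box (f x)

    -- fmap applies an ordinary function inside any such wrapper:
    --   fmap (+1) (Box 1)   == Box 2
    --   fmap (+1) (Just 1)  == Just 2
    --   fmap (+1) [1, 2, 3] == [2, 3, 4]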


Yeah I get that, I don't get what it is supposed to say about the domain. Ie. the real-world facts that the data type models. That's why I approve of separating option types and range types. "Apply to value if it exists" seems like a different operation from "apply to all values"; even if it's the same one on a deeper level, it feels like it's the same on a level deeper than the reality it represents.


I think I can somewhat glean what you're saying, but I feel like this is a part of reality, not something beyond it. The fact that this similarity exists between these seemingly disparate things is a part of reality.

Consider this: the functor notion is saying that any thing can be put into a list, that any thing can either be present or missing, and so forth. This is a commonality between the concept of list and the concept of possibly-present-thing (an option type). It also allows you to express some very elegant code in a uniform manner.

This is not making the claim that they are the one and the same operation on some deeper level, just noticing the commonality and presenting a uniform interface based on the commonality.


I get the advantages, I'm just uncomfortable with it.


Fair enough.


I guess you can implement fmap in any language with higher-order functions including Python. The point is it is built into Haskell and supported by a number of fundamental types out of the box.


They mention indentation, but you don't have to use Haskell's indentation rules; you can use braces and semicolons instead if you prefer, which is what I prefer too. So I don't think indentation is much of a problem.


"A functor instance lifts a function over some structure to affect only the values within. An applicative (a monoidal functor) leverages a structure and function together while lifting the function over structure. A monad (an applicative functor) creates new structure whilst in the process of lifting a function over existing structure."

I think https://xkcd.com/483/ should apply to programming languages too. ;-)

But seriously, can someone put this in English? Is it really more complicated than "apply a function to every object in a list"?


But "apply a function to every object in a list" isn't the definition that the author is getting at!

Most programmers probably know "map" as being an operation takes a function and a list and returns a list of the results of the function applied to each element of the input list. However, the "map" operation can be generalized to generic types other than lists, where it has different behavior but shares certain properties. In functional programming, the idea of a functor gives a name to this pattern.

The idea is that if you have some generic type `T a` covariant in `a`, there is a function `map : (a -> b) -> T a -> T b` that can "lift" every function `f : a -> b` into a function `map f : T a -> T b`. The lifting operation respects function composition and identity.

However, what if the function has multiple arguments, e.g. `f : a -> (b -> c)`? Then `map f : T a -> T (b -> c)`. An applicative functor lets you convert that `T (b -> c)` into a `T b -> T c`. So, with applicative functors, you can individually map each parameter of a multi-parameter function.

A monad is a functor that can be "flattened." It has an operation `join : T (T a) -> T a`, as well as an operation `return : a -> T a`. A monad can be seen as generalizing the idea of flattening a list, or generalizing async/await.
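Concretely, with Maybe playing the role of T (a sketch):

    import Control.Monad (join)

    ex1 :: Maybe Int
    ex1 = fmap (+1) (Just 1)          -- functor: Just 2

    ex2 :: Maybe Int
    ex2 = (+) <$> Just 1 <*> Just 2   -- applicative: Just 3

    ex3 :: Maybe Int
    ex3 = join (Just (Just 3))        -- monad: flattens to Just 3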


I really like your description. One thing I don't understand, though, is why generalising these operations is considered a good thing.

In my day-to-day I spend a bit of time writing functional code, and a lot of time reviewing it, and when you generalise in this manner it hides a lot of the details of the algorithmic complexity. Is this operation happening in a future, or is it on a list? You could argue that the type signature will let you know, but quite often it's inferred.

Suddenly I find myself needing an IDE just to do code reviews. People make arguments about naming variables and suddenly we're back to using Hungarian notation.

I also find it's easy to make mistakes - you do a flatmap instead of a map and the Nones just vanish. Or you're composing so many generic functions that the intent of the code just disappears.

I guess i just wanted to see if it's just me.


> is why generalising these operations is considered a good thing?

> Is this operation happening in a future? or is on a list? You could argue that the type signature will let you know, but quite often it's inferred.

1) You are generalizing the input of a function to multiple types. Many of the functions you'd write wouldn't care whether you're using a List or a Future, they would accept either. It's just like declaring your Java method to take a Collection instead of an ArrayList.

2) You are getting more specific in the output of the function. If you write `filter`, it would be nice to get a `LinkedList` back if you pass in a `LinkedList`. With more common type systems, the best you'll be able to do writing the function once is a return type of `Collection` or `Iterable`.

Your method may perform horribly if a LinkedList is passed in instead of an ArrayList. If your operation really depends on a specific behavior the Functor doesn't specify, you should be using a different type than Functor.


> Many of the functions you'd write wouldn't care whether you're using a List or a Future, they would accept either.

I can't imagine a situation where that would be true, except if you're writing a monad library. You could say that you might want to interchange a list of futures for a future of lists, but that's two interdependent things being swapped not one.


You've made at least two comments now mentioning this idea of interchange, but there's more to all of this than just that. The Haskell standard libraries are full of generic functions that can be applied to many practically useful types as long as those types provide certain basic guarantees.

For example, there is a useful little function

    replicate :: Int -> a -> Seq a
that allows us to build a sequence of n of some value. But what if we don't have a raw value, but instead some Applicative wrapper around it, and instead of just getting a sequence of individually wrapped values, we want that Applicative structure around the sequence? That is, we want some function f that will give us

    f 3 (Just 1) == Just (fromList [1, 1, 1])
but

    f 3 Nothing == Nothing
with a Maybe, and

    f 3 (Right 'x') == Right (fromList "xxx")
but

    f 3 (Left "error message") == Left "error message"
with an Either, and similarly for any other Applicative.

Well, such a function also exists in Data.Sequence:

    replicateA :: Applicative f => Int -> f a -> f (Seq a)
Because it works with any Applicative, it can just as well work out all of the possible outcomes when we roll three dice:

    replicateA 3 [1, 2, 3, 4, 5, 6]
or repeat the action of any Monad three times:

    replicateA 3 $ putStrLn "Hello, world!"
or repeat whatever behaviour some Applicative you haven't even written yet will have next week.

The rabbit hole gets as deep as you like. For example, we can easily compose a variety of other standard functions to count how many ways there are to reach each possible total in that dice example:

    -- (&&&) comes from Control.Arrow; group and sort come from Data.List
    map (head &&& length) $ group $ sort $ fmap sum $ replicateA 3 [1, 2, 3, 4, 5, 6]
    -- [(3,1),(4,3),(5,6),(6,10), ...]
Many of these functions have quite general types, for example sum working on any Foldable and fmap working on any Functor. Because a list is an Applicative and any Applicative is also a Functor, and since the individual results of the replicateA will be Seqs and any Seq is also a Foldable, everything works fine, and the list structure we started with is preserved all the way through.


You can write your own definitions for your types for instances of Functor, Applicative etc. explicitly. Are you worried about the performance when you refer to algorithmic complexity, or what exactly?


Well performance is certainly the main factor, but it's really about the Big O's of a function. https://en.wikipedia.org/wiki/Big_O_notation


Small correction: Join should be T (T a) -> T a


Oops, thanks.


Well, in the case of functors - yes, that's basically it. Except it's also "apply a function to every object in a tree", "apply a function to an optional object (if it's there)", and so on (there's a list of ones built into Haskell's standard library here [1]).

And then the real kicker is - very often stuff can use that general functor idea ("apply a function to whatever objects are inside this wrapping object, whatever it is") instead of being written against a specific, concrete kind of thing. And then you can use that stuff in all kinds of unexpected places, and combine it with other stuff written in the same way, and it all composes together and works harmoniously.

[1] https://hackage.haskell.org/package/base-4.12.0.0/docs/Prelu...


I think it's important to point out that Functor, Applicative, etc. are not only about "containers" like lists and trees which hold a specific set of concrete values but also things like IO actions ("compose this pure function with the result of the readLine action, producing a new action") or even regular functions (where the map operation is just function composition). When other languages import the concept they unfortunately often specialise it to only work on containers.
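Two non-container instances, sketched:

    -- IO: compose a pure function with an action's result.
    lineLength :: IO Int
    lineLength = fmap length getLine

    -- Functions: fmap on ((->) r) is just composition,
    -- so fmap (+1) (*2) == (+1) . (*2), and f 3 == 7.
    f :: Int -> Int
    f = fmap (+1) (*2)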


For things like this I recommend going to Wikipedia and looking up "Functor" - there's some weird dislike of category theory terms, but we're all fine with terms like "singleton" or "Factory".

You'll see the definition and be like, "shit that's just a map." We sell each other a bit short.

Likewise for applicative, monoid, monad -- they're mathematically pretty clear and if you squint while assuming a list is a category, you can probably read most of the definitions and get it.

functor -> equivalent of map for lists

monoid -> equivalent of having an identity ([]) and a join (a.concat) for lists

applicative -> equivalent of turning one element into a list

monad -> equivalent of unwrapping list items, applying a function to each, and rewrapping it

This is just for lists, but the terms are more powerful than that and have many variations. The nice thing is that it's universal: you don't have to wonder, "have I done this pattern right," because you can test the laws for them. It might also help, for some of the laws, to remember identity and associativity as in multiplication -- i.e. 1 * 9 = 9, 3 * 2 = 2 * 3
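For instance, the functor laws written as checkable, QuickCheck-style properties (a sketch):

    prop_identity :: [Int] -> Bool
    prop_identity xs = fmap id xs == xs

    prop_composition :: [Int] -> Bool
    prop_composition xs =
      fmap ((+1) . (*2)) xs == (fmap (+1) . fmap (*2)) xs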


> applicative -> equivalent of turning one element into a list

This is only one part of the definition of an applicative. Applicatives must also preserve the monoidal operation (product). That is, there is a function `pair : f a -> f b -> f (a * b)`. Then, you can map functions of multiple arguments by turning `(a * b) -> c` into `f (a * b) -> f c` via `map` and passing it the result of the applicative `pair` operation.
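In Haskell terms, that pair operation is liftA2 (,) (a sketch):

    import Control.Applicative (liftA2)

    pair :: Applicative f => f a -> f b -> f (a, b)
    pair = liftA2 (,)

    -- pair (Just 1) (Just 'x') == Just (1, 'x')
    -- pair (Just 1) Nothing    == Nothing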

> monad -> equivalent of unwrapping list items, applying a function to each, and rewrapping it

I'm not sure how precise or accurate this definition is. I'd say that a monad is about collapsing multiple layers of a functor into one layer.

> This is just for lists, but the terms are more powerful than that and have many variations. The nice thing is that it's universal: you don't have to wonder, "have I done this pattern right," because you can test the laws for them. It might also help, for some of the laws, to remember identity and associativity as in multiplication -- i.e. 1 * 9 = 9, 3 * 2 = 2 * 3

Yup, I agree that this is a benefit of generalizing these operations based on their common structure.


I actually wrote a monad tutorial last year: https://bytes.yingw787.com/posts/2019/12/06/monads/

After reading about monads...honestly no, there is no way to simplify it. Without understanding the type system, it’s alien.


> Is it really more complicated than "apply a function to every object in a list"?

Not more complicated, but yes, more general. Perhaps "apply a function to every object in a data structure," even when that data structure can be kind of abstract.


One thing that could help you understand is that Haskell has many types which act kind of like containers of primitive values.

One example is `Maybe a`, which has values of the form `Just a`. `a` can have any type, including `String`, `Int`, `Float`, etc. But a function that accepts a `String` can't take a `Maybe String` as an argument.

This is the problem that `Functor` solves. It applies your function to the value inside the container.

`Applicative` applies a function that takes two arguments to the values inside two containers. For example, you have the function `(<>) :: String -> String -> String` and you apply it to two `Maybe String` values, like this:

    ghci> liftA2 (<>) (Just "hello") (Just " world")
    Just "hello world"


From the official documentation:

> Functors: uniform action over a parameterized type, generalizing the map function on lists.

> Is it really more complicated than "apply a function to every object in a list"?

It is defined for more data structures than just lists. Other than that, not at all.

As far as I can tell, the only reason Haskell has both map and fmap is because "map" came first. When applied to a list, fmap and map are identical.


Any js-er review?


If you mean Javascript to Erlang, probably not for the foreseeable future. I want to learn to ship product for a bit, and hopefully I can talk about that :)


What kind of products are you building?


It’s under wraps right now (mostly because I can’t look people who believed in me in the eye if I fail) but it’s a tiny CRM for software engineers. Feel free to subscribe to my mailing list on the blog for more information, or email me at me@yingw787.com :)


I'm a JS dev that learned Haskell over the past few years. Anything in particular you're curious about?

I guess if I were to mention one thing I really miss in Haskell, it's having easy, generic dictionary types a la JSON. Give me some syntactic sugar to write a dictionary like JSON objects, where the properties are ordered by insertion (unlike most HashMaps, which are unordered or rely on some predefined key ordering). Also, I'd love some better record types.


This page made me permanently write off Haskell:

https://wiki.haskell.org/Prime_numbers

It is so much work, so much effort to develop some very mediocre, very slow, and incomprehensible implementations of an undergraduate level algorithm. I ended up picking CL for the project I was working on (Project Euler) and never looked back.


You are of course entitled to your own opinion, but I would say that’s unfairly harsh towards Haskell. GHC Haskell not only compiles down to a binary, it can also transpile to C--, both of which can run performantly.

I started off FP with lodash, then moved to Python and built in functions, and I’ve tasted the benefits of applying functional programming in production, and so that’s what makes learning Haskell interesting and worthwhile to me. I’m looking forward to bringing what I’ve learned from Haskell towards the rest of the code I write, even if it’s not in Haskell.


I write Python for a living, and it is an astoundingly productive language. What it gets right is the compromise between functional, object-oriented, and imperative styles. I too have been fascinated by the "pure functional" type languages, but they always fall short when it comes to "real world" projects.


I think it’s definitely hard for practitioners to adjust to FP, especially when under the pressure to ship. I think a great opportunity to look for ways to apply FP would be in small utilities or libraries. I’m planning my next project in Python but if I’m fortunate to ship that I’ll see if I can circle back to Haskell and go from beginner to intermediate by building a small library.


If you're interested in FP, I highly recommend checking out Let Over Lambda. It made me realize that there is actually lot in common between OO and FP.

https://letoverlambda.com/


Weirdly, I never found functional programming in Python at all enjoyable. The language just doesn't seem to have enough facilities to support it; it feels like a sort of anti-pattern there.


You'd probably like ML (especially Ocaml) then. It can do all the crazy FP drugs (including lazy evaluation), but it's also painless to write normal imperative code in.

Syntax is a hair grotty though.


C-- is one of the stages of its compilation to a binary IIRC. I wouldn't say it "can transpile to C--", as if C-- is an optional target which the user then does something else with.


What's developed on that page is not a simple sieve of Eratosthenes. You can of course write a simple prime sieve in Haskell like you would write in any imperative language, and it would be pretty much just as efficient.

The algorithms on that page are for incremental prime sieves. These are far more difficult to implement than a fixed-size sieve. What's more, I would argue that the Haskell implementations on that page are far shorter and easier to understand than the imperative counterparts. Take this version, for example:

    -- union merges two ordered lists; _Y is a fixpoint combinator
    -- (both are defined elsewhere on the wiki page)
    joinT ((x:xs):t) = x : union xs (joinT (pairs t))
      where pairs (xs:ys:t) = union xs ys : pairs t

    gaps k s@(x:xs) | k < x     = k:gaps (k+2) s
                    | otherwise =   gaps (k+2) xs

    primes = 2 : _Y ((3:) . gaps 5 . joinT . map (\p-> [p*p, p*p+2*p..]))
Compare those 5 lines to any of the algorithms found in:

Pritchard, Paul. ‘Improved Incremental Prime Number Sieves’. In Algorithmic Number Theory, edited by Leonard M. Adleman and Ming-Deh Huang, 280–88. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer, 1994. https://doi.org/10.1007/3-540-58691-1_67.

Sorenson, Jonathan P. ‘Two Compact Incremental Prime Sieves’. LMS Journal of Computation and Mathematics 18, no. 1 (2015): 675–83. https://doi.org/10.1112/S1461157015000194.


You can implement the regular sieve of Eratosthenes using mutable unboxed vectors if you like. If you did not spend enough time with Haskell to realize that, you are in no position to judge it.

On the other hand, I would have used CL as well...
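For illustration, a fixed-size sieve over mutable unboxed vectors might look like this (a sketch assuming the `vector` package and n >= 1):

    import Control.Monad (forM_, when)
    import qualified Data.Vector.Unboxed as V
    import qualified Data.Vector.Unboxed.Mutable as MV

    primesUpTo :: Int -> [Int]
    primesUpTo n = V.toList . V.map fst . V.filter snd . V.indexed $ V.create $ do
      sieve <- MV.replicate (n + 1) True
      MV.write sieve 0 False
      MV.write sieve 1 False
      forM_ [2 .. isqrt n] $ \p -> do
        isPrime <- MV.read sieve p
        when isPrime $
          forM_ [p * p, p * p + p .. n] $ \m ->
            MV.write sieve m False   -- cross off multiples of p
      pure sieve
      where isqrt m = floor (sqrt (fromIntegral m :: Double))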


> If you did not spend enough time in Haskell to realize that you are in no position to judge it.

Sadly this will never stop anyone from doing it anyway.

I had a VPE (didn't know Haskell) who tried to read some of the team's Haskell and couldn't grok it despite his many years of software experience. His response? Spout that "people are saying the Haskell codebase is hard to understand" without any indication that he was in fact "people."


"mutable unboxed vectors"? You mean like a C array? How primitive and horrifying!


One fun thing people don't know about Haskell is that it lets you get as low-level as C (modulo still having an RTS). Manual memory management, pointer arithmetic... it's all there in the standard library!
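A taste of the Foreign.* modules in base (a sketch):

    import Foreign.Marshal.Alloc (mallocBytes, free)
    import Foreign.Ptr (Ptr, plusPtr)
    import Foreign.Storable (peek, poke)
    import Data.Word (Word8)

    main :: IO ()
    main = do
      buf <- mallocBytes 4 :: IO (Ptr Word8)   -- manual allocation
      poke buf 42
      poke (buf `plusPtr` 1) (7 :: Word8)      -- pointer arithmetic
      b <- peek buf
      print b                                  -- 42
      free buf                                 -- manual deallocation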


... and fast!



