Though it's quite confusing to many, myself included, I would've loved to hear more about the types-vs-terms distinction, and more generally about what's possible with type-level programming. That's something fairly unique among programming languages, after all.
I'd certainly be interested in reading more of the author's thoughts if he continues down the path of using Haskell to solve problems.
Python is still my strongest language, and I’m planning on shipping product for a bit after this, so I will be using that for the next few months. I’ll do my best to circle back to Haskell, I love the community and see its potential in what I want to do. Subscribe to my email list if you want to stay updated!
type(type) is type
evaluates to True.
And that is not permitted in the mathematical type theory to which the Curry-Howard correspondence applies.
Comment here: https://lobste.rs/s/ovebeq/pythonista_s_review_haskell#c_gz0...
(That is, the dynamic-or-static choice is a product type, not a sum type!)
Not really. "Dynamic type" just means an object that carries around a description of its structure (or more commonly, a pointer to a description). You can do the same thing in many languages with static types. (Including Haskell. GHC has special support for this pattern.)
i.e. Haskell includes the unit type ()
Bob Harper uses "unitype", which is a pretty strong argument for the name's validity.
There is no "undefined" value in pure Haskell. There are expressions that have no value, due to a runtime exception (error, undefined, division by zero, memory allocation failure) or non-termination (unbounded recursion). Either way, evaluating the expression does not produce a value for the remainder of the program to operate on—you can't write a (pure) program like `if isUndefined expr then x else y` since—assuming isUndefined is a pure function and doesn't ignore its argument—the whole program is undefined if expr is undefined.
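A small illustration of this point, relying on GHC's lazy evaluation (the names here are my own):

```haskell
-- Laziness means an expression with no value only "infects" the program
-- when something actually forces it.
main :: IO ()
main = do
  let xs = [1, undefined, 3] :: [Int]
  print (length xs)  -- prints 3: length never forces the elements
  -- print (sum xs)  -- would abort with: Prelude.undefined
```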
It will never converge towards being easy to handle. The load doesn't come from being inexperienced, it comes from keeping the big picture in your head (all your types and their interactions) instead of iterative thinking.
I barely keep anything in my head most of the time when programming in haskell. It's kind of related to the "emphasis on structure" point from the article. All of a type's "interactions" are just functions. There aren't any second-order things to consider due to referential transparency.
Functions that actually work on the data structures are generally guaranteed to be able to work on that data. This way you don't need to constantly be mindful of what the data looks like; the compiler does that for you.
An empty string is equivalent to an empty list 
If you want a null, use the Maybe monad.
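For example (a minimal sketch; `safeHead` and `describe` are names I made up):

```haskell
-- Maybe makes absence explicit: the compiler forces you to handle Nothing.
safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x

describe :: Maybe Char -> String
describe (Just c) = "first char: " ++ [c]
describe Nothing  = "empty string"

main :: IO ()
main = do
  putStrLn (describe (safeHead "abc"))  -- first char: a
  putStrLn (describe (safeHead ""))     -- empty string
```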
I suspect this kind of question would be better asked on the Haskell IRC channel or similar rather than trawling stack overflow for insights. If someone has the relevant missing insight they can quickly point it out and save a lot of confusion.
In C you also have a char type and you also have single-quoted char literals, and the third line is a compile-time error in C too for exactly the same reason, because a char literal represents a char, not a list of anything, and so it can't be empty in the way that a string can be empty.
From the post:
> Python doesn't distinguish between Char and String types with single and double quotes, so empty char is empty string is an empty list.
This is basically right, but the subtle misunderstanding here is that there is no such thing as an empty char, or indeed a char of any kind, in Python; the closest you have is chr() and ord() to get integers out of strings. Indexing a string just gives you another string of length one; there's no char type in Python. If you wanted to implement cons, car, and cdr in Python such that (cons (car "abc") (cdr "abc")) = "abc", you wouldn't be able to do it in the natural way, because the type that (car "abc") needs to have doesn't exist in the language. Strings in Python are almost a list type, but the elements of that list aren't representable.
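For contrast, a quick sketch of how the same decomposition does work in Haskell, where `String` is literally `[Char]`:

```haskell
-- cons/car/cdr in Haskell: the element type (Char) is first-class,
-- so reassembling a string from its head and tail round-trips exactly.
main :: IO ()
main = do
  let car = head "abc"        -- 'a' :: Char
      cdr = tail "abc"        -- "bc" :: String
  print (car : cdr)           -- "abc"
  print (car : cdr == "abc")  -- True
```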
So, coming from Python, the misunderstanding about why there's no "base case" for char as there is for string is understandable, but it doesn't really have much to do with Haskell or with any kind of runtime error handling.
>Once you understand that, this confusion makes about as much sense as expecting an integer to be "empty".
My comment was in reference to the above. A null number does make sense because of operations like 1/0. There is nothing wrong with including a null in a set if some operations on it are illegal: such operations can't return anything, so you can use a null to represent the absence of something. This is entirely different from an empty container type.
>So, coming from Python, the misunderstanding about why there's no "base case" for char as there is for string is understandable
The author is looking for a null value not an empty container type or "base case". I quote: "Haskell has no null definition of type Char,"
Basically he's looking for the equivalent of "None" in Haskell. He's slightly confused in thinking that an empty container type or empty string is equivalent to a "None."
Because he literally said he's looking for a null he is indeed talking about all things associated with null.
The closest thing to null in Haskell is the Maybe monad, which is like null but without runtime exceptions.
His confusion arises because he sees that in Haskell you can do this:

stringfunction (x:xs) = ...
stringfunction [] = ...

and so expects to be able to do this:

charfunction x = ...
charfunction null = ...
I'm saying that similar functionality can be achieved with:
charfunction (Just x) = ...
charfunction Nothing = ...
I'm pointing out the different semantics of single-quoted literals as the source of the confusion, with "empty char" being directly mentioned in the post as something that exists in Python, which, as of course you know, it doesn't.
If the confusion was about how to add a nothing value to a type, why did it arise specifically in regards to Char, and not, say, Integer? The question "why are the char types different between Haskell and Python in this way" is best answered by pointing out that Python doesn't have a char type at all, and all of these things are strings.
Of course, you're entirely correct that you can use a sum type to handle nulls.
As I illustrated earlier, because of the similarity between char and str, he believed that since recursive functions on str have a base case, char needed one too.
Sure you can!
>>> car = lambda x: x[0]
>>> cdr = lambda x: x[1:]
>>> cons = lambda x, y: x + y
>>> cons(car("abc"), cdr("abc"))
'abc'
For a correct implementation of cons, the following gives a list '("a" "b") of two strings:
(cons "a" (cons "b" ()))
In other words, if (cons "a" "bc") gives you "abc", then it's not cons.
*** Exception: Prelude.head: empty list
• In Java forgetting to check for null will crash your program.
• In C, forgetting to check for NULL will lead to undefined behavior.
• In Haskell, you either have to (a) check for Nothing, or (b) explicitly say "crash the program if this is Nothing" by using a function like fromJust or head, which internally does (a). This is always an improvement over C, and often an improvement over Java.
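A sketch of (a) vs (b) in Haskell (`greet` is a made-up example):

```haskell
import Data.Maybe (fromJust)

-- (a) explicit check: the function is total and can never crash
greet :: Maybe String -> String
greet (Just name) = "hello, " ++ name
greet Nothing     = "hello, stranger"

main :: IO ()
main = do
  putStrLn (greet (Just "world"))  -- hello, world
  putStrLn (greet Nothing)         -- hello, stranger
  -- (b) opting into a crash has to be written explicitly:
  -- putStrLn (fromJust Nothing)   -- *** Exception: Maybe.fromJust: Nothing
```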
Side note: I like Rust's convention of consistently naming these might-crash-your-program methods "unwrap" and "expect".
I understand why that choice was made in Haskell, even though I disagree with it.
Yes, I'm aware that 1/0 raises an exception.
I'm saying that it makes sense to have nulls because 1/0 is an undefined operation, so representing the output of 1/0 with a null makes sense; that's why the existence of nulls makes sense. That is not Haskell code I'm typing up there. "1/0 = undefined" is not code.
If you read my reply, you will see that my statement isn't referring to an exception caused by 1/0. It's referring to a null exception, which is completely different.
I am explicitly saying that the "null" in Haskell is part of a sum type (the Maybe monad), and in that case the "null" or Nothing will be handled with no runtime errors. You cannot have a null exception in Haskell.
Never did I say Haskell has no runtime errors, and 1/0 is not an exception caused by a null. Please analyze and reread my statement.
Nothing to do with null per se.
Anyway, the real lesson is that even if your language has sane defaults that force you to check that your list isn't empty, or that your pointers aren't null, a bad library function like List.head can hide that check away from you and crash anyway, bringing you back to square one in terms of the type system preventing unexpected crashes.
So, then you're not familiar with C/C++, Java, C#, Pascal/Delphi, Ada - to say the least.
Well, yes, strictly speaking you can do just about anything in just about any Turing complete language if you work hard enough. However, expressing patterns like that with the powerful, general semantics and the elegant syntax of the Haskell version is not something that is offered by any of the other languages you mentioned.
It was perhaps an unfortunate choice of example, because many languages do have first class support for some sort of array/list type and some sort of optional/nullable type and so might offer a reasonably tidy implementation of this specific case. However, in Haskell, the exact same technique would apply just as much to any choices of Functors. Indeed, you can often write generic algorithms that work with any choices of Functors (or Applicatives, Foldables, Monoids, etc, depending on what minimal functionality is required for your algorithm to make sense). Likewise, there are very many generic algorithms already available in libraries that can be used with both the standard instances of those typeclasses and new ones you write yourself. This is the big win for Haskell's type and typeclass system, or what the original author refers to as its emphasis on structure.
They're trying to say that Java doesn't support writing a single function like so:
<T<A> extends Iterable<A>, A, B> T<B> fmap(T<A> iterable, Function<A,B> mapper)
I'm not so sure fmap would make semantically as much sense in something like Java as it does in FP.
You say that, but actually fmap doesn't exist in OCaml (or I assume any other common MLs). Of course you can embed a language and type system within it that does have it (https://blog.janestreet.com/generic-mapping-and-folding-in-o...) but that's different.
OCaml still has currying, algebraic data types, "implicit" returns, tail call optimization, etc., and it encourages recursion instead of loops wherever applicable.
-- zs' = [Just 2, Nothing, Just 4]
zs' = (fmap . fmap) (+1) [Just 1, Nothing, Just 3]
Nullable!int[] zs = [2.nullable, Nullable!int(), 4.nullable];
// no partial application, and
// separate template parameters and runtime parameters
// so (fmap . fmap) (+1) is one function
Nullable!int[] zs2 = map!(apply!(i => i + 1))(zs).array;
(fmap . fmap) (+1) [Just 1, Nothing, Just 3]
(fmap . fmap) (+1) (Just [1, 3])
(list_map . option_map) (+1)
(option_map . list_map) (+1)
The usefulness of this is arguable. In my biased opinion as someone who favours OCaml (which lacks the typeclasses that allow this) over Haskell, I think it's minimal. While it's definitely useful to recognise that lists, options and promises are examples of the same pattern, I don't think this generalisation has many benefits in code (at least in MLs, where you don't need this pattern just to print things).
And I have argued it on D, and we very deliberately did not go with the range semantics. Optional types are not ranges unless you're compulsively hunting for forced commonality. Might as well treat any value as a range of 1.
For instance, the list type `[a]` and the option type `Maybe a` are both functors. The associated type variable is `a`. This type variable can be manipulated by using `map` to apply some function `f : a -> b` over values of both `[a]` and `Maybe a`. Applying `f` using `map` over such values would yield values of new types, namely `[b]` and `Maybe b`.
This implies that `map f` is a general way of expressing the application of function `f` to values of any functor type. The notion of functor types is therefore supposed to represent any kind of abstract containment relationship. In other words, any time you have the situation of an arbitrary type being wrapped in some kind of "wrapper" (list of..., set of..., range of..., optionally present..., etc), you are dealing with a functor type.
The functor abstraction allows you to apply ordinary functions to such values in a uniform way, regardless of which particular kind of container or wrapper we are dealing with.
Consider this: the functor notion is saying that any thing can be put into a list, that any thing can either be present or missing, and so forth. This is a commonality between the concept of list and the concept of possibly-present-thing (an option type). It also allows you to express some very elegant code in a uniform manner.
This is not making the claim that they are the one and the same operation on some deeper level, just noticing the commonality and presenting a uniform interface based on the commonality.
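A small sketch of that uniform interface:

```haskell
-- The same fmap call works on any Functor: list, Maybe, Either, ...
main :: IO ()
main = do
  print (fmap (+1) [1, 2, 3])                       -- [2,3,4]
  print (fmap (+1) (Just 1))                        -- Just 2
  print (fmap (+1) Nothing)                         -- Nothing
  print (fmap (+1) (Right 1 :: Either String Int))  -- Right 2
```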
I think https://xkcd.com/483/ should apply to programming languages too. ;-)
But seriously, can someone put this in English? Is it really more complicated than "apply a function to every object in a list"?
Most programmers probably know "map" as an operation that takes a function and a list and returns a list of the results of the function applied to each element of the input list. However, the "map" operation can be generalized to generic types other than lists, where it has different behavior but shares certain properties. In functional programming, the idea of a functor gives a name to this pattern.
The idea is that if you have some generic type `T a` covariant in `a`, there is a function `map : (a -> b) -> T a -> T b` that can "lift" every function `f : a -> b` into a function `map f : T a -> T b`. The lifting operation respects function composition and identity.
However, what if the function has multiple arguments, e.g. `f : a -> (b -> c)`? Then, `map f : T a -> T (b -> c)`. An applicative functor lets you convert that `T (b -> c)` into a `T b -> T c`. So, with applicative functors, you can individually map each parameter of a multi-parameter function.
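Concretely, with the standard `<$>`/`<*>` operators:

```haskell
-- Applying a two-argument function through an applicative functor:
-- (+) <$> x produces a wrapped partially-applied function, and <*>
-- feeds it the next wrapped argument.
main :: IO ()
main = do
  print ((+) <$> Just 1 <*> Just 2)    -- Just 3
  print ((+) <$> Just 1 <*> Nothing)   -- Nothing
  print ((+) <$> [1, 2] <*> [10, 20])  -- [11,21,12,22]
```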
A monad is a functor that can be "flattened." It has an operation `join : T (T a) -> T a`, as well as an operation `return : a -> T a`. A monad can be seen as generalizing the idea of flattening a list, or generalizing async/await.
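For instance, using `join` from `Control.Monad`:

```haskell
import Control.Monad (join)

-- A monad is a functor whose nested layers can be flattened into one.
main :: IO ()
main = do
  print (join [[1, 2], [3]])                  -- [1,2,3]
  print (join (Just (Just 5)))                -- Just 5
  print (join (Just (Nothing :: Maybe Int)))  -- Nothing
```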
In my day to day I spend a bit of time writing functional code, and a lot of time reviewing it, and when you generalise in this manner it hides a lot of the details of the algorithmic complexity. Is this operation happening in a future, or is it on a list? You could argue that the type signature will let you know, but quite often it's inferred.
Suddenly I find myself needing an IDE just to do code reviews. People make arguments about naming variables and suddenly we're back to using hungarian notation.
I also find it's easy to make mistakes - you do a flatmap instead of a map and the Nones just vanish. Or you're composing so many generic functions that the intent of the code just disappears.
I guess I just wanted to see if it's just me.
> Is this operation happening in a future? or is on a list? You could argue that the type signature will let you know, but quite often it's inferred.
1) You are generalizing the input of a function to multiple types. Many of the functions you'd write wouldn't care whether you're using a List or a Future, they would accept either. It's just like declaring your Java method to take a Collection instead of an ArrayList.
2) You are getting more specific in the output of the function. If you write `filter`, it would be nice to get a `LinkedList` back if you pass in a `LinkedList`. With more common type systems, the best you'll be able to do writing the function once is a return type of `Collection` or `Iterable`.
Your method may perform horribly if a LinkedList is passed in instead of an ArrayList. If your operation really depends on a specific behavior the Functor doesn't specify, you should be using a different type than Functor.
I can't imagine a situation where that would be true, except if you're writing a monad library. You could say that you might want to interchange a list of futures for a future of lists, but that's two interdependent things being swapped not one.
For example, there is a useful little function
replicate :: Int -> a -> Seq a
f 3 (Just 1) == Just (fromList [1, 1, 1])
f 3 Nothing == Nothing
f 3 (Right 'x') == Right (fromList "xxx")
f 3 (Left "error message") == Left "error message"
Well, such a function also exists in Data.Sequence:
replicateA :: Applicative f => Int -> f a -> f (Seq a)
replicateA 3 [1, 2, 3, 4, 5, 6]
replicateA 3 $ putStrLn "Hello, world!"
The rabbit hole gets as deep as you like. For example, we can easily compose a variety of other standard functions to count how many ways there are to reach each possible total in that dice example:
map (head &&& length) $ group $ sort $ fmap sum $ replicateA 3 [1, 2, 3, 4, 5, 6]
-- [(3,1),(4,3),(5,6),(6,10), ...]
And then the real kicker is - very often stuff can use that general functor idea ("apply a function to whatever objects are inside this wrapping object, whatever it is") instead of being written against a specific, concrete kind of thing. And then you can use that stuff in all kinds of unexpected places, and combine it with other stuff written in the same way, and it all composes together and works harmoniously.
You'll see the definition and be like, "shit that's just a map." We sell each other a bit short.
Likewise for applicative, monoid, monad -- they're mathematically pretty clear and if you squint while assuming a list is a category, you can probably read most of the definitions and get it.
functor -> equivalent of map for lists
monoid -> equivalent of having an identity () and a join (a.concat) for lists
applicative -> equivalent of turning one element into a list
monad -> equivalent of unwrapping list items, applying a function to each, and rewrapping it
This is just for lists, but the terms are more powerful than that and have many variations. The nice thing is that it's universal. You don't have to wonder, "have I done this pattern right," because you test the laws for them. It might also help, for some of the laws, to remember identity and associativity as in multiplication -- i.e. 1 * 9 = 9, 3 * 2 = 2 * 3.
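For example, the list instance lets you spot-check a few of those laws directly:

```haskell
-- Spot-checking functor and monoid laws for lists (not a proof, just
-- the kind of concrete check the laws make possible).
main :: IO ()
main = do
  print (fmap id [1, 2, 3] == [1, 2, 3])                               -- functor identity
  print (fmap ((+1) . (*2)) [1, 2] == (fmap (+1) . fmap (*2)) [1, 2])  -- composition
  print (([1] ++ [2]) ++ [3] == [1] ++ ([2] ++ [3]))                   -- associativity
  print ([] ++ [1, 2] == [1, 2] && [1, 2] ++ [] == [1, 2])             -- identity element
```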
This is only one part of the definition of an applicative. Applicatives must also preserve the monoidal operation (product). That is, there is a function `pair : f a -> f b -> f (a * b)`. Then, you can map functions of multiple arguments by turning `(a * b) -> c` into `f (a * b) -> f c` via `map` and passing it the result of the applicative `pair` operation.
> monad -> equivalent of unwrapping list items, applying a function to each, and rewrapping it
I'm not sure how precise or accurate this definition is. I'd say that a monad is about collapsing multiple layers of a functor into one layer.
> This is just for lists, but the terms are more powerful than that and have many variations. The nice thing is because it's universal. You don't have to wonder, "have I done this pattern right," because you test the laws for them. It might also help, for some of the laws, to remember identity and associativity as like from multiplication -- ie 1 * 9 = 9, 3 * 2 = 2 * 3
Yup, I agree that this is a benefit of generalizing these operations based on their common structure.
After reading about monads...honestly no, there is no way to simplify it. Without understanding the type system, it’s alien.
Not more complicated, but yes more general. Perhaps "apply a function to every object in a data structure," even when that data structure can be kind of abstract.
One example is `Maybe a`, whose values are `Just a` or `Nothing`. `a` can have any type, including `String`, `Int`, `Float`, etc. But a function that accepts a `String` can't take a `Maybe String` as an argument.
This is the problem that `Functor` solves. It applies your function to the value inside the container.
`Applicative` applies a function that takes two arguments to the values inside two containers. For example, you have the function `(<>) :: String -> String -> String` and you apply it to two `Maybe String` values, like this:
ghci> liftA2 (<>) (Just "hello") (Just " world")
Just "hello world"
> Functors: uniform action over a parameterized type, generalizing the map function on lists.
> Is it really more complicated than "apply a function to every object in a list"?
It is defined for more data structures than just lists. Other than that, not at all.
As far as I can tell, the only reason Haskell has both map and fmap is because "map" came first. When applied to a list, fmap and map are identical.
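Quick check:

```haskell
-- For lists, map and fmap agree on every input.
main :: IO ()
main = print (map (+1) [1, 2, 3] == fmap (+1) [1, 2, 3])  -- True
```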
I guess if I were to mention one thing I really miss in Haskell it's having easy, generic dictionary types a la JSON. Give me some syntax sugar to write a dictionary like JSON objects, where the properties are ordered by insertion (unlike most HashMaps that are unordered or rely on some predefined Key ordering). Also I'd love some better record types.
It is so much work, so much effort to develop some very mediocre, very slow, and incomprehensible implementations of an undergraduate level algorithm. I ended up picking CL for the project I was working on (Project Euler) and never looked back.
I started off FP with lodash, then moved to Python and built in functions, and I’ve tasted the benefits of applying functional programming in production, and so that’s what makes learning Haskell interesting and worthwhile to me. I’m looking forward to bringing what I’ve learned from Haskell towards the rest of the code I write, even if it’s not in Haskell.
Syntax is a hair grotty though.
The algorithms on that page are for incremental prime sieves. These are far more difficult to implement than a fixed-size sieve. What's more, I would argue that the Haskell implementations on that page are far shorter and easier to understand than the imperative counterparts. Take this version, for example:
import Data.List.Ordered (union)  -- from the data-ordlist package

_Y g = g (_Y g)  -- sharing fixpoint

joinT ((x:xs):t) = x : union xs (joinT (pairs t))
  where pairs (xs:ys:t) = union xs ys : pairs t

gaps k s@(x:xs) | k < x     = k : gaps (k+2) s
                | otherwise = gaps (k+2) xs

primes = 2 : _Y ((3:) . gaps 5 . joinT . map (\p -> [p*p, p*p+2*p..]))
Pritchard, Paul. ‘Improved Incremental Prime Number Sieves’. In Algorithmic Number Theory, edited by Leonard M. Adleman and Ming-Deh Huang, 280–88. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer, 1994. https://doi.org/10.1007/3-540-58691-1_67.
Sorenson, Jonathan P. ‘Two Compact Incremental Prime Sieves’. LMS Journal of Computation and Mathematics 18, no. 1 (2015): 675–83. https://doi.org/10.1112/S1461157015000194.
On the other hand, I would have used CL as well...
Sadly this will never stop anyone from doing it anyway.
I had a VPE (didn't know Haskell) who tried to read some of the team's Haskell and couldn't grok it despite his many years of software experience. His response? Spout that "people are saying the Haskell codebase is hard to understand" without any indication that he was in fact "people."