Implement with types, not your brain (2019) (reasonablypolymorphic.com)
192 points by pcr910303 4 months ago | 119 comments




This way of writing Haskell, from my experience, results in write-only code. While it's nice that the compiler helped you figure out which obscure operators to use for your code to actually compile, it sucks for the next person trying to find the bug in your code. And just because the types check out doesn't mean it's free from bugs.

Code is also meant to be read and reasoned about by humans, not just by the compiler.


Rich Hickey once said something (with regards to TDD) like, "We have guard rails on highways, but that doesn't mean we just take our hands off the wheel because the rails will keep us on the road. Guard rails don't know where we actually want to go. We drive the car where we're trying to go, and the rails are only there for when things go wrong."

I think a version of this applies to static types as well. Even under normal circumstances, I've noticed at times my own thoughts getting lazy when refactoring statically-typed code. I don't "load" as much of the program into my brain as I would otherwise; I just change the part I want to change and then fairly thoughtlessly fix all the type errors. I don't think this is a good thing. The OP takes this to an absolute extreme.

Static types are there to document and to catch bugs early; they cannot be used as a complete verification that your code is correct, much less a generative tool for writing logic where "you're not entirely sure how" it works.


"I don't think this is a good thing."

How good a thing it is depends on how right you are when you do it. If your language is statically typed, but not very well, then it's a bad thing. But it's not just abstractly a bad thing because you are a lazy person who didn't choose to do useless cognitive work, it's a bad thing because you may be wrong. To give a specific example, if your type system forcibly adjoins NULL to all pointer types, you can't afford to take your hands off the wheel when dealing with pointer types and neglect that case. You must think about it when using pointer types.

On the other hand, if you take your hands off the wheel for a moment, but you perform only type system manipulations that you have good and correct reasons to believe won't affect the behavior of the underlying code... then what's wrong with that?

I'm guessing you've used the former type of language more than the latter? Me too. I think you've turned the justifiable fear you feel in that sort of language into a heuristic, which makes it easy to then forget the underlying problems. But there's no particular virtue to having to understand more of the program to do a manipulation than having to understand less. Proof: If that was always true, then that recurses on itself until you shouldn't ever make a change until you understand the whole program completely in your head at once (if not the whole system it is running on). Even if that's true, it doesn't matter; we humans don't have the cognitive capacity for that, we must window our understandings.

It isn't just the micro task of refactoring one particular function; it's completely common for me to undertake a type-driven refactoring of code where just the type-driven aspects are plenty to fill my cognitive window. Especially since I'm usually in a looser language and I really have to watch those darned forcibly-adjoined NULLs. If I'm somehow "obligated" to understand more, the result isn't that I'm going to have that understanding, the result is that my hand will be tied and I won't be able to do that refactoring because it'll blow out my cognitive budget.


> we must window our understandings

I'm not arguing with that. I'm only saying that we shouldn't assume "the types are enough", so long as we have more "cognitive window" available. Here's why:

  add     :: Integer -> Integer -> Integer
  add x y =  x + y

  add     :: Integer -> Integer -> Integer
  add x y =  x - y
The type system doesn't have any idea which of these is correct. So if the programmer is "asleep at the wheel", it's still entirely possible for their logic to come out wrong.

All I'm saying is we shouldn't needlessly ignore the actual logic in favor of "letting the compiler write it for us". That's what the OP is expressly advocating for.


I am really torn on the subject. I write physics simulations for research use and will therefore assume an FDTD code that simulates Maxwell's equations as an example.

On one hand, there are a bunch of minus signs in Maxwell's equations, and there is nothing except historical nomenclature and experiments that can tell you where they go. Nothing in the type system can know if dB/dt is curl(E) or -curl(E). (Note that dE/dt has the opposite sign.) And if you implement, say, a finite difference stencil, you have a bunch of numerical coefficients like (1/24, -27/24, 27/24, -1/24) that are not straightforward to derive. If you go to NSFD these coefficients will actually come out of the numerical optimization run by another code. The compiler will never be able to type check that.

On the other hand, it can be quite useful to have the compiler type-check variables for correct SI units, so you don't accidentally add curl(E) to B without multiplying by the time step dt. But that is already quite possible with the type system in C++, which is derided by purists as bad and weak. Maybe it could even add checks that stuff in the new time step has to have a change proportional to dt added to it, and prevent me from accessing stuff that is not properly time centered. But then again I have not seen anybody write code like that in Haskell that did anything useful.
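
For what it's worth, the unit-checking idea can be sketched in Haskell too (the comment above is talking about C++; the names and unit labels below are invented, a minimal phantom-type sketch rather than a real units library):

    {-# LANGUAGE DataKinds, KindSignatures #-}
    import GHC.TypeLits (Symbol)

    -- A value tagged with a phantom unit label.
    newtype Quantity (unit :: Symbol) = Quantity Double

    -- Addition only type-checks when the units match.
    addQ :: Quantity u -> Quantity u -> Quantity u
    addQ (Quantity a) (Quantity b) = Quantity (a + b)

    -- A hand-written "multiply by dt" step, standing in for curl(E) * dt
    -- having the same unit as B.
    stepB :: Quantity "T/s" -> Quantity "s" -> Quantity "T"
    stepB (Quantity dBdt) (Quantity dt) = Quantity (dBdt * dt)

    -- addQ b (stepB curlE dt)   -- fine
    -- addQ b curlE              -- rejected: "T" does not match "T/s"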

Having guardrails is nice, but you cannot write the entire program in types. And if you try, all you get is the same complexity in an unreadable syntax.


I want to say that the C++ type system is not really weak — it can enforce basically anything, since templates are Turing complete. But most of those possible types are horrifically unergonomic.


That's not what OP is arguing for. Here is the relevant portion that might clear your misconception about this article.

> I’m not going to say that blindly filling in type holes always works, but I’d say maybe 95% of the time? It’s truly amazing just how far you can get by writing down the right type and making sure you use every variable.

> The reason why this works is known as theorems for free, which roughly states that we can infer lots of facts about a type signature (assuming it’s correct.) One of those facts we can infer is often the only possible implementation. It’s cool as fuck, but you don’t need to understand the paper to use this idea in practice.

> One question you might have is “what the heck does it mean for a type to be correct?” Good question! It means your type should be as polymorphic as possible.


It's because `Integer` is an insufficiently specific type to ensure that the value expression is well formed. If you really wanted to, you could prove that `add` is well-formed in Haskell and Haskell-likes: https://github.com/idris-lang/Idris-dev/blob/master/libs/pre...


The author is advocating this method for when the functions being derived are sufficiently polymorphic, which your add function is not.


Indeed, this is a perfect example of why it only works when your types are sufficiently polymorphic, and how Haskell's type system can't always capture all the required logic.


For my "standard" language response: In general, I don't actually like using mathematics for this sort of discussion, because on the one hand it all seems so reasonable, and yet, on the other hand, almost nothing we do involves working in spaces as complicated as numbers. You see something similar in "operator overloading"... makes great sense for numbers, but everywhere else it turns into a pain, because we don't really work in numerical spaces.

The vast majority of the time, we're really working with types like

    User -> Permissions -> PermissionType -> Maybe BillingHistory
and a type-based refactoring of "Permissions -> PermissionType" into "PermissionsWithType" isn't going to go wrong, even in looser languages. (Those aren't very Haskell-y types, but I'll keep the notation here for now.) As long as you can assume your code isn't entirely insane, and the permissions code isn't going to spontaneously guess permissions into existence when you go to check something with them, you're going to be generally OK with this sort of type-directed refactoring.
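
As a rough sketch of that refactoring (all of these type and function names are invented, and the bodies are stubs, since only the shapes matter here):

    data User = User
    data Permissions = Permissions
    data PermissionType = PermissionType
    data BillingHistory = BillingHistory

    -- The fused type from the refactoring described above.
    data PermissionsWithType = PermissionsWithType Permissions PermissionType

    -- Before: permission data arrives as two separate arguments.
    billingFor :: User -> Permissions -> PermissionType -> Maybe BillingHistory
    billingFor _ _ _ = Nothing        -- stub body

    -- After: the two arguments are fused; callers get fixed mechanically by
    -- chasing the resulting type errors.
    billingFor' :: User -> PermissionsWithType -> Maybe BillingHistory
    billingFor' u (PermissionsWithType p t) = billingFor u p t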

To the extent that you can't actually depend on your code to not do things like magic permissions into existence... well... this is precisely why that's always a bad idea. Don't write code that does that.

For my Haskell response: In Haskell, when you know how to read the type system, you don't have to use a heuristic to guess whether you should slow down: the type itself is telling you that you need to be more careful. As a reasonably skilled Haskell programmer using type-directed reasoning, you can immediately read off from "Int -> Int -> Int" that you have a more complicated function than "a -> a -> a". The latter is so "simple" there are only two reasonable implementations, whereas that's only the beginning of what your Int function may do. However, when you have "Applicative f => f (a -> b) -> f a -> f b", you actually can know what that is not doing. The polymorphism tells you you don't have to sit there and study it. It can't magic an "a" into existence, or suddenly decide to use "division" on the resulting "b", because it can't. It doesn't know how. You don't need to worry about it doing those things anymore, because it can't.
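
A quick sketch of that contrast (plain Haskell, nothing specific to the article):

    -- The fully polymorphic type has essentially two implementations:
    pickFirst, pickSecond :: a -> a -> a
    pickFirst  x _ = x
    pickSecond _ y = y

    -- The monomorphic version is only the beginning of what it may do:
    mystery :: Int -> Int -> Int
    mystery x y = x * y - 7           -- one of infinitely many inhabitants

    -- And the Applicative signature already rules most mischief out; it cannot
    -- invent an `a` or do arithmetic on `b`. This is just the standard (<*>):
    apply :: Applicative f => f (a -> b) -> f a -> f b
    apply ff fa = ff <*> fa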

For my harmonization of the two: In principle, imperative code can almost always magic values into existence, use globals, etc. One of the reasons I can stand writing in looser languages like Go is that I just don't do those things. In general, when I write code, if you've got a "Permission" and "PermissionCheckable", I do my best to keep it so there's only one sensible way to combine the two. So, when I'm refactoring my code, even in looser strictly-typed languages, I can still pretty much get away with the type-directed approach. (Backstopped by the unit tests I have anyhow.) It is certainly true that you can write, or be forced to deal with, code that is conceptually mishmashed where this is more dangerous, but I'd suggest as you get experienced over the years that you learn to stop doing that.


> Static types are there to document and to catch bugs early; they cannot be used as a complete verification that your code is correct, much less a generative tool for writing logic where "you're not entirely sure how" it works.

You might be surprised to learn that this is not only possible but something research is definitely working towards.

In a language like Agda, for example, when there is enough type information available the compiler can fill in the holes for you with the obviously correct implementation.

The idea behind program synthesis drives this even further. Using type level specifications we can automatically derive the programs that meet those specifications. This is useful because the language of types is much more concise than the language of terms. For sufficiently difficult problems it's much easier for humans to reason about complexity in higher-level specifications. Let the computer generate the code!

It's still early on for this kind of technology but projects like Synquid[0] are making good headway.

Dependent-type theory also forms the basis of interactive theorem proving in Coq and Lean[1]. We use something like holes as the basis of proofs. Haskell's typed holes are quite a bit looser, but to me it feels very similar to working with such a system. I propose to Haskell that there exists an expression satisfying a particular type, and then I use the typed holes to fill in my obligations to provide a proof. The hole is my goal and the available objects in scope are my terms. It is a very effective tool for solving hard problems.
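
A tiny, made-up example of that hole-driven workflow in Haskell (the intermediate holes are shown as comments; only the final version compiles as written):

    -- Start from the goal type and let each hole state what is still owed:
    --
    --   composeEither f g a = _                -- hole :: Either e c
    --   composeEither f g a = case f a of
    --     Left e  -> Left e
    --     Right b -> _                         -- hole :: Either e c, with b in scope
    --
    -- which leads to:
    composeEither :: (a -> Either e b) -> (b -> Either e c) -> a -> Either e c
    composeEither f g a = case f a of
      Left e  -> Left e
      Right b -> g b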

[0] https://www.csail.mit.edu/research/synquid-synthesis-liquid-...

[1] https://leanprover.github.io/

update: Links


I do wonder whether there is any reason to believe that generating code from types would ever be easier than writing the code directly. More specifically, I wonder if this:

> This is useful because the language of types is much more concise than the language of terms.

actually holds with sufficiently complex types. For example, would the fully-specified type for quicksort actually be shorter than quicksort?

I don't know that there is any reason to believe so. The best we may be able to hope for is that we'll have less work to do for formal verification, since instead of writing both the proofs and the code, we may be able to only write the proofs and have the code for free. However, given that writing the proofs is, at least currently, much, much harder than writing just the code, I'm not sure this type of development would make more than a dent in the software engineering world.


Let's consider an algorithm that shares state among multiple concurrent threads of execution without coordinating work through a lock.

What is the probability that an expert software engineer could write a correct implementation of this algorithm, I wonder? As their colleague, will we be able to review that code and notice if it contains an error? What are the sufficient parameters for an acceptable solution: should it run with 8 threads and 2 state variables?

I don't think we'll see these tools and practices become widespread, but I do see the cost of using them coming down. I also see the number of cases growing where systems that aren't safety-critical are nevertheless causing harm to property and people. It's possible that at this intersection we'll see adoption grow: bringing down the cost of writing software that handles higher degrees of liability could be a useful tool to have.


I'm not sure what you are referring to as "these techniques" - are you referring to full formal specification with dependent types?

If so, then I think the answer so far is: even though few senior SEs could write correct implementations of that algorithm, there are definitely many more of them than there are people who can write non-trivial software with complex dependent types in a decent amount of time.

Of course, you can write code in Idris or Agda and be productive, but not if you want to do stuff like actually proving that your sort function produces a sorted permutation of the original list, or establishing other similarly rich, tight bounds.

The more you want to express in your types, the more complex your program becomes. A very nice example is how you would implement matrix multiplication in regular ways (with matrices of numbers) versus how much more work you need to add if you want to track physical quantities (with proper support for different physical quantities for each value in the matrix). It's easy to say {{1, 2}, {3, 4}} can be multiplied by {{5, 6}, {7, 8}}, but much harder to say if {{1m, 2kg}, {3N, 4Pa}} can be multiplied by {{5m, 6N}, {7Pa, 4kg}}.


I think it's harder to find an error in a program than in a proof. At least for proofs that can be verified automatically by a computer. And that scales with the difficulty of the problem. A proof that a program copies files from one place to another is not interesting. However a proof that your algorithm can safely share state without a lock is.

My point here is not that dependent type theory is going to take over the industry and we should all learn to write proofs!

It's more that we can if we want to. Writing a proof with tactics in Lean feels a lot like programming with typed holes in Haskell (it's harder but the experience is similar). With training a motivated programmer can write proofs that verify their designs which provide a strong guarantee of correctness.

Whether types are more expressive than programs... I think they are? That seems to be the whole point of Synquid: you can express complex constraints and proofs in your types that enable it to derive the program for you. Is that easier than simply writing the program?

I think one should consider that writing the correct program is harder. And where getting it right matters a lot I hope that synthesis will one day allow us to derive those programs from their specifications (or at least the parts that matter the most). It would save a lot of the burden of proving that the program we wrote implements the specifications.


In my experience it's much easier. A lot of my API relies on this. I even produce clients in other languages purely based on the API types which act as a form of specification.

Haskell's compiler will generate all the logic for serialization and deserialization, leaving me to only implement the actual business logic.


Serialization and deserialization are probably the lowest hanging fruit for this. Java has them, and it can even serialize objects with behavior.

I think you're thinking of a very shallow kind of auto-generation of code, which doesn't include any kind of behavior beyond calling your API.

The GP though was talking about generating the behavior based on the type specification, which is also doable in principle. For example, you could provide a complex type specification for a sorting function, and have the compiler infer the implementation.

For example, you could declare a function which takes a list of `Ord a` and returns a list of `Ord a` such that the second list is a permutation of the first list and each element in the second list is <= to the next element (you of course need dependent types for this). With this type signature, the system could generate the code for a function which achieves this purpose.
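
Roughly, the specification described above, written here as ordinary Haskell predicates rather than a dependent type (a deliberate simplification: a dependently typed version would push these conditions into the signature itself):

    import Data.List (sort)

    isSorted :: Ord a => [a] -> Bool
    isSorted xs = and (zipWith (<=) xs (drop 1 xs))

    isPermutationOf :: Ord a => [a] -> [a] -> Bool
    isPermutationOf xs ys = sort xs == sort ys

    -- What the richer type would demand of a candidate sorting function.
    satisfiesSortSpec :: Ord a => ([a] -> [a]) -> [a] -> Bool
    satisfiesSortSpec f xs = isSorted (f xs) && (f xs `isPermutationOf` xs)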

However, I do not believe that it will be easier to write that type signature (especially if you want to add properties like stability or time and space constraints) than it would be to write the actual sorting function yourself and stick with `Ord a => [a] -> [a]`. I also don't think we will see anything like this in the near future that doesn't rely on a large set of pre-defined functions that the system simply fills in.


That was just a superficial example. If you include generics, template metaprogramming, auto derivations and such then you could achieve much more with code generation. My skills are just not advanced enough yet to fully utilize these in my programs.


> For sufficiently difficult problems it's much easier for humans to reason about complexity in higher-level specifications. Let the computer generate the code!

Yet the humans must still fully consider the higher-level specification. I know that what you describe isn't a compiler per se, but from the perspective of programmer experience it seems analogous. You're not enabling the programmer to "not think", just to think on a more abstracted level. I think the gist of my point still stands; maybe not the final paragraph.


at least with what i have seen of program synthesis, you end up having to write such a specific specification that you are basically defining the implementation (i.e., programming).

dependently typed languages also seem woefully unprepared for real systems programming. they are probably good in the small but are so difficult in the large that the costs outweigh the benefits.


I'm happy that the static type guard rails exist. I mainly work on e-commerce codebases that have been operating for, on average, more than 5 years, have had a lot of different developers work on them, and are typically large enough (1 million lines of code or more) that I can't keep it all in my head. For this type of work, it's very nice to have static types, as some of the code has tests, some doesn't, and at least the static type is a sort of enforced guard rail. Ideally, everything would have tests, and on most projects I work on we try to move in that direction with varying amounts of success. For me, the most challenging changes can be JavaScript changes where there are no tests and no types.


Nobody is saying that guard-rails in the form of tests or static types aren't a good thing.


your paraphrasing of rich's thoughts makes it seem like he is missing what type driven development is really about, and that is domain driven development. the idea behind letting types lead the way is not to just blindly rely on static types. it's old news that that is only a first approximation to writing good code. what types in sane languages (like f#'s records, discriminated unions, units of measure, classes, functions, etc.) allow is for you to model your domain in a clear way in code, such that your programming becomes much more concise and clear as to what it is doing. the fact that static types try to link up everything is a side effect of that process.


he never played need for speed


I think you are talking about a different domain than the author. The author is talking about writing plumbing code - the parts where everything is reusable and multipurpose. In contrast, you seem to be focusing on the business logic portion of the domain where all complexity is necessary.

In the reusable abstraction space, what you're talking about just doesn't happen. There aren't bugs that need to be hunted down by careful inspection in these things because the methods described in the article are sufficient to catch them. They'll fail to type check or have obvious flaws of the sort -Wall catches.

In the business logic domain, there is a lot more room for errors that can't be caught by those approaches. But code in that domain also can't be written like this in the first place, so it's kind of a strange criticism to say "the code I can't write like this contains bugs that are hard to track down when I try to write it like this."

I can sense one additional objection coming: "If this isn't for business logic, what value is it? I write business programs."

To that, I would answer that most programs are stuffed full of redundant non-business logic, and you don't even realize how bad it is. It took roughly 30 years of c-like languages for people to be fully on-board with the idea that 3-clause for loops are error-prone boilerplate separate from the business logic and worth abstracting out.

If you use Haskell enough you'll realize a lot of your code in most languages is there to fiddle with implementation details instead of working over the actual problem space. A lot of Haskell code takes the form of super-generic abstraction plumbing bits that serve to isolate a common pattern and make sure the edge cases are covered.

Those bits of code are the ones usually called "impossible to understand", which are fully described by their types, and which can be implemented with this approach.


> Those bits of code are the ones usually called "impossible to understand", which are fully described by their types, and which can be implemented with this approach.

Once I let go of needing to understand how a piece of code turns into CPU instructions, I found that this code is actually the easiest to understand!


Even code that has nothing to do with business logic benefits from being readable and from being written with a purpose in mind, so that when a typed hole suggests something fishy, you can immediately spot it.

This is coming from someone who tried that approach and had a bug in production.


> Code is also meant to be read and reasoned about by humans, not just by the compiler.

And thinking with types helps you to do that. Satisfying the type checker doesn't guarantee a program is functionally correct, but it does reject a large number of obviously incorrect programs from being considered. And with holes it can guide us towards the likely correct program.

From TFA:

> I’m not going to say that blindly filling in type holes always works, but I’d say maybe 95% of the time?

It's a tool. If you don't know how to implement something, a good first approximation is understanding its shape and filling in the holes. I think this is the spirit of what the article is demonstrating.

Writing the program so that it's clear is the next step. The final example, `zoop` is renamed to `foldr` once we realize what we're working with. This is how I often work with Haskell in practice: I know the types and shapes of values I need to produce and I work with the compiler to write an expression that fits the type. Then I rewrite for clarity and intent.


I think to some extent, the preference of some Haskellers to write obscure code is orthogonal to having the types help your development.

Most Haskell code I read in the wild is borderline incomprehensible because of things like poor naming choices. Typically I can rewrite the code to be clearer, but it's a lot of effort.

I think Haskell would benefit from some style guidelines that emphasize that your code ought to be understandable by others. That would at least counteract the natural tendency to believe that terseness is a virtue or a goal.

You can write incomprehensible C too. It's just that most people have stopped doing that.


It's quite common to see one- or two-letter variables in C too, unfortunately, and typically they will have the most unhelpful type, like "int", that says very little about them.


I agree that the result, while cool, is not readable. An obvious next step would be to demonstrate how the type-checked lambda-heavy implementation can be refactored into something that also makes sense to a human.

> just because the types check out doesn't mean it's free from bugs

That's certainly true in the general case. In fact, the author gives a type-checked-but-wrong example of "we need an Int, so just return zero."

But in some specific cases, as the author writes, "One of those facts we can infer [from a type signature] is often the only possible implementation." The article demonstrates that "possible" includes "also makes use of all the values available (or else the signature would be simpler)."


Not just Haskell. I've done a very similar thing in Java which I regret.

The trick is to make sure that as soon as you've written this garbage, you go back, split things up into proper methods with descriptive names. Even things like introducing intermediate variables goes a really long way towards improving readability.

To me, this is a starting point, not the finish line.


It works best if you follow up the compiler assistance with a human-readability pass. Use the techniques in this post to guide you to a solution, which I find immeasurably helpful, then rewrite it by pulling it out piece by piece to make it readable with let/where blocks that describe the various steps.
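
For instance, taking the `jonk` example from the article, one possible readability pass might look like this (the first definition is the hole-driven solution from the post; the names in the rewrite are invented):

    jonk :: (a -> b) -> ((a -> Int) -> Int) -> ((b -> Int) -> Int)
    jonk ab aii = \bi -> aii (\a -> bi (ab a))

    -- After a readability pass:
    jonk' :: (a -> b) -> ((a -> Int) -> Int) -> ((b -> Int) -> Int)
    jonk' convert runWithMeasure = \measureB ->
      let measureA a = measureB (convert a)
      in  runWithMeasure measureA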


In my Haskell experience, there's no substitute for Taste :)


As much as I am in favour of readable code, the point of the article is that you do not need to read the function! There is only one sensible thing that a function of that type can do. The type really is, in this case, the best technical explanation for what the code does. (One might want to add comments that help you understand the meaning but the implementation is really irrelevant.)


If you think that about the first example in this blog post, then I think you're looking at the wrong part. This is code within a library that does one specific abstract thing. The term-level isn't important - look at the type signature.

I am familiar with free monads/extensible effects, so I know the idioms at play in that 1-argument function. We're taking an extensible effect, "peeling" State off it (this is an idiom of EEs) and returning the rest of the effects within StateT. I now know this is a way to interop between the State effect in the effect stack & the StateT monad transformer. I don't give 2 shits about the implementation - I can use this to solve my problems just fine.

EDIT: HAHA - I just realized that this function is called "hoist" which is exactly the name I would've given it based on common Haskell idioms. I didn't even read the name - just the type. That's the essence of Haskell :)


Any language with a large enough lexical set or enough meta-programming capabilities can be classified as "write-only" from the point of view of non-experienced users. It's the everlasting battle between learning curve and expressiveness.


> This way of writing Haskell, from my experience, results in write-only code.

That's fine because you can enter zen state and refactor it using compiler hints ;)


In less expressive statically typed functional programming languages this way of writing works better IMO.


I don’t see this as much an effect of Haskell as of a younger culture of coding standards. I’ve certainly found trivially readable complex Haskell code before, but naming and library operators are a massive issue in the community.


what about typed eDSLs? you get compiler-aided type hole filling, but on top of a human-digestible ontology. only asking as I don't have experience with this


This article is a great example of how FP is currently being ruined in the same way OOP was ruined in late 80s. The paradigm had a lot of great ideas, but it became so popular so quickly that people began to fetishize its terminology and tools, while completely forgetting what those tools were meant to accomplish.


Awful comment. The OP, while not pedagogically ideal, demonstrates proof writing along the Curry-Howard correspondence, an extremely deep topic which predates popular functional programming and OOP by decades. To liken this to "tool fetishization" is pure nonsense.


Very true. And you should get really worried once we have FP consultants and executives talking about FP. This seems to happen to a lot of good ideas, be it OOP, Agile or TDD. Ideologues have very compelling messages but they usually destroy what they are promoting. They don’t allow the question “why are we doing this again? Are we on the right track?”


What are the tools "meant to accomplish" exactly?

I've deployed production code built with exactly the techniques in the article. It's doable and in my experience easy (modulo the large learning curve - pay it once, pay it forever).

The (very simple all things considered!) top-level type signature of `Sem (State s ': r) a -> S.StateT s (Sem r) a` is also something I've used in production (different library of the same sort)


I wonder which fraction of the code actually benefits from these techniques, though. Certainly I become suspicious from the fact that examples like the ones in this article always tend to be of a very narrow, specific kind.

In my experience, every worthwhile piece of code outside of certain libraries has enough of what you might call "business logic" that these theoretically clean approaches no longer apply. This has been true across lots of domains, from games, to CRUD, to compilers, to combinatorial optimization algorithms.

I'd go on to argue that the interesting core of all those problem domains was precisely in the areas that couldn't be solved using cute techniques like OP's.


mainstream spoils everything


As a non-Haskell-er, this article is impossible to follow. I have tried to learn Haskell a few times, always unsuccessfully. Haskell code is filled with so many Haskell-specific things. I realize the goal isn't to be an 'easy' language, but jeez.

Aside from the horrendous variable naming, I'm having trouble understanding what the author actually accomplished, and why it's necessary for a tool like this to exist in the first place.

Can anyone translate the problem into a more simplified, non-Haskell-specific version?

> jonk :: (a -> b) -> ((a -> Int) -> Int) -> ((b -> Int) -> Int)

I have no idea what this line means.

To me, it looks like he wants to write a transformer that matches a specific signature, and he uses a tool that tells him which functions can fit the 'hole' he has in the code, based on types. If I'm correct (I'm assuming I'm not), how is this different than intellisense in an IDE that tells you a list of type-valid functions/variables? Why is this a necessary part of the Haskell coding process? My first instinct is that this demonstrates a problem with the language, if something like this is necessary.

Appreciate anyone that tries to explain. I know most of my assumptions are probably wrong.


> I have tried to learn Haskell a few times, always unsuccessfully

I suspect that people wanting to learn Haskell actually want to learn a statically typed functional programming language. And if you also want that language suitable to everyday problems, the kind of things you are dealing with at work, then Haskell is probably not the best one to start with.

I love how Scott Wlaschin presents statically typed functional programming languages as straightforward and pragmatic solutions to everyday problems:

https://www.youtube.com/watch?v=PLFl95c-IiU

https://pragprog.com/book/swdddf/domain-modeling-made-functi...

Once you're comfortable with that (and the way it is presented here really isn't very complex) then Haskell becomes a lot easier to understand.

But Haskell is very distracting because it is much more expressive and complex than ML dialects, for example, so it will be a big learning curve before someone is comfortable reading someone else's Haskell code. For other statically typed functional programming languages (e.g. OCaml/ReasonML, F#, Elm) that is not the case.


5 minute guide to Haskell:

Any time you see a '::', it means 'the thing on the left hand side has the type on the right hand side'. As such, you should parse this line as 'there is a function jonk that has a type '(a -> b) -> ((a -> Int) -> Int) -> ((b -> Int) -> Int)'.

A simple type in Haskell is `Int -> Bool`. This means a function that takes a value of type Int and returns a value of type Bool.

Now `Int -> (Int -> Bool)` means 'a function that takes a value of type Int and returns a function that takes a value of type Int and returns a value of type Bool'. And because of currying, this can also be read as 'a function that takes a value of type Int, then a value of type Int, and returns a value of type Bool'. As in, if you first call the function with an Int, you'll get an `Int -> Bool` in return. Then you call that with an Int, and you get a Bool. Because of how we define precedence, this signature can also be written as `Int -> Int -> Bool`, which makes it a bit more readable. However, if you really want to grok Haskell, you need to grok this equivalence early on.

Compare this to `(Int -> Int) -> Bool`. This means 'a function that takes a function that takes a type Int and returns a type Int, and returns a type Bool'. By this point you should see that the type is more readable than my human language descriptions.

One last thing you're missing is the types 'a' and 'b'. You'll notice that these start with a lowercase letter, and that's because they are generic types. It means that in practice this function can be called with any type in place of 'a' and 'b' in the declaration; it's just that it must be the same type for all occurrences of 'a' and all occurrences of 'b'. For a simple example, let's look at a function that takes two Ints and a Bool, and returns the first Int if the boolean is true, otherwise the second. The type signature for this in Haskell would be `foo :: Int -> Int -> Bool -> Int`. However, the function doesn't really care that these are Ints - they might as well be any values. As such, we can define the function signature as `foo :: a -> a -> Bool -> a`. This is naturally similar to generics in C++, TypeScript or Rust.
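
To make that concrete, here is a small sketch of those signatures (the function names are made up):

    -- `Int -> Int -> Bool` is the same type as `Int -> (Int -> Bool)`:
    atLeast :: Int -> Int -> Bool
    atLeast x y = y >= x

    -- Partial application: supply the first Int, get an `Int -> Bool` back.
    isAdult :: Int -> Bool
    isAdult = atLeast 18

    -- The monomorphic and generic versions of the selection function:
    fooInt :: Int -> Int -> Bool -> Int
    fooInt x y b = if b then x else y

    foo :: a -> a -> Bool -> a
    foo x y b = if b then x else y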

Armed with this knowledge (albeit maybe explained a bit better - see Learn You A Haskell!) you should be able to read this complex type signature now.


As I interpret it, here is a Java 7 version, with generic types A and B and the following types defined:

  interface Adapter<S, T> {public T adapt(S s);}
  interface Statistic<T> {public int valueOf(T t);}
  interface Summary<T> {public int summarize(Statistic<T> st);}
we want to implement

  interface Jonk<A, B> {
    public Summary<B> jonk(Adapter<A, B> adapter, Summary<A> summary);
  }
So we start implementing,

  class MyJonk<A, B> implements Jonk<A, B> {
    public Summary<B> jonk(Adapter<A, B> adapter, Summary<A> summary) {
      // Equivalent to the author's jonk ab aii = _
    }
  }
Looks like we'll need to create a Summary<B>, so let's do that:

  class MyJonk<A, B> implements Jonk<A, B> {
    public Summary<B> jonk(Adapter<A, B> adapter, Summary<A> summary) {
      return new Summary<B>() {
        public int summarize(Statistic<B> st) {
          // Equivalent to jonk ab aii = \bi -> _
        }
      };
    }
  }
Following the article, notice that there are a lot of signatures returning int around; pick summary rather than st:

  class MyJonk<A, B> implements Jonk<A, B> {
    public Summary<B> jonk(Adapter<A, B> adapter, final Summary<A> summary) {
      return new Summary<B>() {
        public int summarize(Statistic<B> st) {
          // Equivalent to jonk ab aii = \bi -> aii $ _
          return summary.summarize( /* ...? */ );
        }
      };
    }
  }
Skipping some intermediate steps, we eventually end up with the solution, which maps to the Haskell:

  // equivalent to jonk ab aii = \bi -> aii $ \a -> bi $ ab $ a
  class MyJonk implements Jonk<A, B> {
    public Summary<B> jonk(Adapter<A, B> adapter, Summary<A> summary) {
      return new Summary<B>() {
        public int summarize(Statistic<B> st) {
          return summary.summarize(new Statistic<A>() {
            public int valueOf(A a) {
              return st.valueOf(adapter.adapt(a));
            }
          });
        }
      }
    }
  }
EDIT: I'm bad at formatting.


Learn You a Haskell is a good book that explains it well. It’s one of the best intro programming books I’ve read, not just intro Haskell books.

http://learnyouahaskell.com/


I think a lot of people are missing the point of the article. People are complaining about the complexity of the function or similar, which is basically brushing off that some problem domains just are complex.

Putting this into context, the author is demonstrating a function from his effect system library Polysemy[0], which is both a complex concept and a complex problem to solve.

To achieve this with his library, some complicated plumbing is necessary, and one of those functions happens to be `hoistStateIntoStateT`. The author's point is then summarised perfectly by:

> Gee, that’s complicated! I must be really smart to have written such a function, right?

> Wrong! I just have a trick!

The author shows how you can, when posed with just knowing the type you want to achieve, more or less generate/infer the implementation from that.

This is a common situation when stitching functions or parts of your program together, where you have an `A` and want to transform it into a `B`, given the functions you have available.

[0] https://github.com/polysemy-research/polysemy


> The author shows how you can, when posed with just knowing the type you want to achieve, more or less generate/infer the implementation from that.

All this means is that, with a robust enough type system, determining the correct type is exactly as complex as determining the correct implementation.

In some cases, in some languages, expressing the solution in the language of types may be more intuitive than and help guide the language of implementation, but, because they are essentially encoding each other, that's not only as subjective as any question of what is intuitive must be, but also an artifact of the particular type language and implementation language. In a different implementation language, the problem may be as (or more!) intuitive as it is in the type language in use.


This is one of those journeyman techniques that takes a lot of trial and error before you realize how helpful it is. Thanks so much for sharing your hard earned insight. I do get that people are complaining because the example is artificial and leads to a lot of complexity. I get why you did an artificial example, to show how universal and powerful the technique is. I think complaints in the comment section are inevitable.


This is an extreme example of something I’ve found to be quite common: programming “types first”. For another example, I recently read a tutorial on parsing that started by defining `abstract class AstNode`, followed by multiple concrete derived classes. Only after ~100 lines was there any actual parsing logic.

My approach would be the opposite - start with the logic, using built-in types, and maybe refactor to use custom classes later. Perhaps there are downsides to this approach, but it’s how I think about solving the problem. I’ve found this style of programming is easiest in languages that have powerful, flexible built-in types, which seem to be best provided by dynamically-typed languages.


> My approach would be the opposite - start with the logic, using built-in types, and maybe refactor to use custom classes later.

That's how you discover what abstract logic you need. And then you implement that logic types first, because it's abstract, so the types are both enough to describe the functionality and much easier to reason about. And you go and refactor your old code by following the types (or the type errors, if you are more in the let the computer do the work mood).

But I have no idea how to do that in dynamically typed languages. AFAIK, they hinder refactoring as much as they can.


The article does a very good job of illustrating why you should never program this way, if you read between the lines. More precisely, here is the damning piece of evidence:

> Finally we’re finished! A little experimentation will convince you that this zoop thing we just wrote is in fact just foldr! Pretty impressive for just blindly filling in holes, no?

Yes, that's pretty impressive. Instead of just using foldr, you've reimplemented it, and don't even know that! We also have no idea whether this code we've stumbled into is equivalent to foldr in terms of performance. If I knew you write code like this, I would feel compelled during code review to squeeze my brain to try to see whether there is some library function that does what you just did.

Also, it's absurd to claim that these implementations are the only possible ones and that they are correct, when all you have is the pretty weak type system of base Haskell (even though it is probably the strongest type system in regular use!).

Sure, for a->a there is a single possible implementation, but as soon as you have a list, for example, among your parameters, there are now arbitrarily many possible implementations, which differ either in meaning or in performance characteristics.
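
To illustrate: all of the following inhabit `[a] -> [a]`, and the type alone cannot tell them apart:

    keepAll, reversed, dropEverySecond :: [a] -> [a]
    keepAll  = id
    reversed = reverse

    dropEverySecond (x : _ : rest) = x : dropEverySecond rest
    dropEverySecond xs             = xs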


You are missing the point, which is that if you can blindly follow the type hints of the compiler and still get a non-stupid function when you're joking around like the author is, then in real-life coding situations the type hints will be all the more relevant and reliable.

By the way, this is literally the most common foldr implementation in functional languages, so I have no idea why you would want to do a code review of it, or talk about performances in such a trivial setting, or pretend that you're confused by it so much that you have to refer to a library...


> then in real-life coding situations the type hints will be all the more relevant and reliable

That was exactly my point about the foldr example: in real-world code, you may need a function that perfectly matches foldr's type signature. But following the author's methodology, you'll write out a new implementation of foldr, when what you should actually be doing is calling foldr in that type hole.

This ties back in to my comment about code review. Think about what a patch from someone who applied the author's method would look like: it would contain a lot of in-place implementations that they sort of stumbled into, when in fact they should have reused well-known functions. Someone would then have to review that code to see whether some implementation is not actually a well-known function.

Even ghc suggested at some point that he should use maxBound or minBound instead of writing his own thing.

Edit: I should also add that there is no reason to think that foldr is the right implementation of zoop without more context. For example, zoop could be a function just like foldr but that works on the reversed list of `as`. Or one that skips every second element of `as`. If we're just guessing based on type, and don't know what the code is supposed to do, it's easy to mess up.


> If I knew you write code like this, I would feel compelled during code review to squeeze my brain to try to see whether there is some library function that does what you just did.

That's because the author didn't put a type hole before declaring arguments.

  zoop :: (a -> b -> b) -> b -> [a] -> b 
  zoop = _
  <interactive>:10:8: error:
    • Found hole: _ :: (a -> b -> b) -> b -> [a] -> b
      Where: ‘a’, ‘b’ are rigid type variables bound by
               the type signature for:
                 zoop :: forall a b. (a -> b -> b) -> b -> [a] -> b
               at <interactive>:9:1-38
    • In the expression: _
      In an equation for ‘zoop’: zoop = _
    • Relevant bindings include
        zoop :: (a -> b -> b) -> b -> [a] -> b
          (bound at <interactive>:10:1)
      Valid hole fits include
        zoop :: (a -> b -> b) -> b -> [a] -> b
          (bound at <interactive>:10:1)
        foldr :: forall (t :: * -> *) a b.
                 Foldable t =>
                 (a -> b -> b) -> b -> t a -> b
          with foldr @[] @a @b
          (imported from ‘Prelude’
           (and originally defined in ‘Data.Foldable’))

Here, GHC has figured out that, yes, `zoop` is in fact directly equivalent to `foldr`. And it'll do this with all other holes involved in the iterative type hole development process. So, you don't have to worry about whether there is some library function that does what the author did. GHC will do that for you.


> Instead of just using foldr, you've reimplemented it, and don't even know that!

This completely misses the point of the article...

The author demonstrates a general concept (type holes) on a simple piece of code, to explain and motivate how he used the exact same methodology on a much more complex piece of code, `hoistStateIntoStateT`.


It doesn't. That bit of the article shows that maybe his implementation of hoistStateIntoStateT could be replaced with an already existing function, or a composition thereof.

Also, his entire method breaks down the moment you add an [a] among your parameters (or even return values, possibly), and probably other types as well, and he doesn't even mention anything about that.


I think it's a means of fighting the eventual complexity of Haskell types, for when you cannot write your boilerplate code yourself any longer. But then the solution is also a cause of your problem.


I don't really agree with the article to any extreme, but IMO it's a really useful pedagogical tool to "follow the types". The book "Functional Programming in Scala" uses it a lot, and it really is easy to do in some cases. Implementing map or flatMap, for example, is really demystified by looking at the type signature first and then realizing, given how abstract it is, that there aren't really meaningful alternative implementations for the type signature.

In a more general sense, by utilizing nominal types and defensive programming, you can get a lot of mileage out of making sure there is one obvious way to do something. A canonical example is a validation function that takes User and returns ValidUser. If there's only a single function in your codebase that can do that transformation, you can actually use a (debatably simpler) principle of following the types to guide implementation, and in fact actively prevent the wrong implementation from being written in the first place.

These things are certainly possible to varying degrees with dynamically typed languages, or statically typed OOP languages, but I've never encountered a paradigm that makes representing invalid program states impossible to the degree that statically typed FP makes possible.


> a->a

Even there, it's not true, right? It's been a while since I learned / used Haskell, but I think something that transforms a { name: 'bob' } to a { name: 'sally' } would fit that bill, right? So, there are many implementations which meet the type signature, but you'd be pretty surprised if your `identity` function did this.


  f :: a -> a
can only be satisfied by the function identity because there are no constraints on the type (a). identity is the lowest common denominator function that can transform an object of any type to another object of the same type.

If you added a constraint on a, for instance:

  f :: Num a => a -> a
Now f can be almost any function which can transform a numeric type back to the same numeric type. [0] describes class Num. So f, with this constraint, can now be negate, abs, signum, or something which combines the first parameter with itself using one of the operators, and so on.
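
A few of the inhabitants mentioned, for concreteness:

    f1, f2, f3, f4 :: Num a => a -> a
    f1 = negate
    f2 = abs
    f3 = signum
    f4 x = x + x       -- combining the argument with itself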

By removing constraints on the type itself, Haskell necessarily restricts the set of operations that can be performed.

[0] https://hackage.haskell.org/package/base-4.12.0.0/docs/Prelu...


The key point making `a -> a` have only a single implementation is that the type `a` is a wildcard that's determined by the caller - it must be possible for the implementation to accept any value of any type. The identity function is the only thing that can do that for all possible values.

If the type were `Int -> Int`, it could do a lot more.


changing a record requires a more specific type than `forall a. a`

the type `forall a . a -> a` does truly have one single (non exception throwing) implementation since there is no information you can glean about your argument other than the fact that it exists.

Once you have more structure, for example in `Person -> Person` or even `forall a . [a] -> [a]`, you have more possible implementations than just the identity function.


That function wouldn't have type a -> a. It would have type MyRecord -> MyRecord.


no, that's a function from `named -> named`. `a -> a` is a function that takes any object, calls its type `a`, and returns an object of the same type.


> I’m not going to say that blindly filling in type holes always works, but I’d say maybe 95% of the time?

I must do a very different type of programming. In my world, I'd say this might work maybe 20% of the time, max.

[Edit: Others on this thread (lkitching and weevie, at least) say that the difference is that the types must be "sufficiently polymorphic". My types almost never are. I suspect that the difference is, the more polymorphic the types, the fewer details of what can be done with each type, and therefore the fewer the options on the ways they can be combined. For example, if my function takes two ints, I can add, subtract, multiply, divide, take the remainder, etc. I can't use this technique to guide that. But if I'm passed in two "a"s, I probably need to be passed in the function to use to combine them as well (they may not have a "+"). Then it's pretty clear - I'm passed in two "a"s, and a function that takes two "a"s and returns an a, and there's really only one way to combine those parts.]


> One question you might have is “what the heck does it mean for a type to be correct?” Good question! It means your type should be as polymorphic as possible.

There's a mathematical sense in which this is correct, but I disagree for practical software. Yes, there's only one function of type (forall a. a -> a), so you can't screw it up if you have a type checker, while you can easily screw up a function of type (BizLogicType -> BizLogicType) since there are going to be many of those. But when reading code, the most polymorphic version of a function suffers from the same problem that using "x" as a variable name does. It's the type-level version of the difference between ( x = f(g) ) and ( numBooks = countBooks(library) ).


Takeaway - haskell programmers will be replaced by machines first, js devs probably last. /s


If anyone can tell me what these functions do, I'll rewrite them in JavaScript.


> Finally we’re finished! A little experimentation will convince you that this zoop thing we just wrote is in fact just foldr

    zoop _   b []       = b
    zoop abb b (a : as) = abb a $ zoop abb b as
It's difficult to understand the function because the author deliberately named everything just based on their type to show off this method of autocomplete. This is what the function would look like with the same names as Ramda's reduceRight:

    reduceRight _  acc [] = acc
    reduceRight fn acc (head : tail) =
        fn head $ reduceRight fn acc tail
https://ramdajs.com/docs/#reduceRight


JavaScript already has the reduceRight method on Array.prototype. But here it is anyway, as promised:

    function reduceRight(arr, value, reduceFunc) {
        if(typeof value == "function" && reduceFunc == undefined) {
            reduceFunc = value;
            value = 0;
        }
        
        for(var i = arr.length - 1; i >= 0; i--) {
            value = reduceFunc(value, arr[i]);
        }
        
        return value;
    }
    
    assert( reduceRight([1,2,3], (a,b)=>a+b) , 6);
    assert( reduceRight([1,2,3], 1, (a,b)=>a+b) , 7);


Next iteration looks like this. Note that I added another test when I made the update to the function.

    function reduceRight(arr, value, reduceFunc) {
        if(typeof value == "function" && reduceFunc == undefined) {
            reduceFunc = value;
            value = arr.pop();
        }
        
        for(var i = arr.length - 1; i >= 0; i--) {
            value = reduceFunc(value, arr[i]);
        }
        
        return value;
    }
    
    assert( reduceRight([1,2,3], (a,b)=>a+b) , 6);
    assert( reduceRight([1,2,3], 1, (a,b)=>a+b) , 7);
    assert( reduceRight([1,2,3], (a,b)=>a-b) , 0);


hoistStateIntoStateT

The function name means nothing, and the implementation is hard to follow.


Are you familiar with Haskell? Or just a programmer in general?

The name doesn't mean nothing. Either you're a Haskeller not familiar with monad transformer idioms, or you're not a Haskeller just shouting your uninformed opinion.

Hoist means something very specific in the monad transformer world, and it really helps explain what this does. And the "StateIntoStateT" part is also pretty literal (and in the types.)

I'd actually argue this name is boring & completely derived from the types.


I agree with you here that this does make sense from the point of view of even an intermediate Haskeller.

However, I also do think that the jargon in Haskell is extremely daunting. I think it's the only language I've used where easily most of the function names you see relate not to your business logic, but to abstract types. There are some obvious things like 'map' and 'fold', but the deeper you get into complex types, the more it looks like total gibberish (and you end up with lift, bind, hoist, not to mention obscure operators if you're in that sort of codebase). And while most other languages out there are at least somewhat approachable by experienced engineers new to that particular language, Haskell code will basically be ungrokable black magic until you spend a few months full-time learning it.

And the worst part is that there are so many styles of Haskell - even if you're experienced, sometimes when jumping into a foreign codebase you'll suddenly be struck by comonads or some other PL-theory-heavy library you never used before. And that just sucks when you're trying to debug a production bug. To me it basically feels like you're never 'done' learning Haskell.


> And the worst part is that there are so many styles of Haskell - even if you're experienced, sometimes when jumping into a foreign codebase you'll suddenly be struck by comonads or some other PL-theory-heavy library you never used before. And that just sucks when you're trying to debug a production bug. To me it basically feels like you're never 'done' learning Haskell.

It is true that you have to constantly be learning new idioms, (e)DSLs, libraries, types, etc in Haskell. But in my experience, it isn't so bad.

I was at a job with a production Haskell codebase that many senior engineers deemed unsalvageable and were trying to replace in parallel (I don't know if they ever did tbh.) We had production bugs and I had to debug them (I was on the team supporting the production service.) Instead of wincing at how "ugly" and awkward the code was, I just used techniques like the ones in this blog post. I immediately was able to make an impact in the correctness, availability, and performance of the production service, along with the extensibility and ergonomics of the codebase. All because I got my hands dirty instead of wishing things were perfect. It really wasn't hard at all.


Hey - I thought Haskell forces you write code without bugs! :)


Didn't we all at some point :)


> Either you're a Haskeller not familiar with monad transformer idioms, or you're not a Haskeller just shouting your uninformed opinion.

This comment feels unnecessarily aggressive.


Sorry - I just have very little patience for commenters saying Haskell is "hard to understand" or "doesn't mean anything" when they clearly haven't even passed the prerequisites of understanding.

When you don't have the prerequisites, the best thing to do is to be quiet and/or learn rather than pollute the comments with your unfounded value judgements.


so, you're essentially gatekeeping people's opinions about the code, because they don't understand the core concepts of Haskell?

I would argue that someone should be able to understand what a function does by reading the name regardless of their understanding of the underlying language it is written in.

I've noticed this type of weird expectation of understanding is very prevalent in the FP world, which makes it _harder_ for newcomers to understand code at face value.


> I would argue that someone should be able to understand what a function does by reading the name regardless of their understanding of the underlying language it is written in.

That's a bad litmus test imo and only encourages familiarity. If I used this to form my opinions, I would have peaced out the moment I saw >>= or traverse or foldMap

I am never going to believe everything should be immediately obvious to everyone. Sometimes, you have to buy in and learn some new abstractions instead of just favoring the ones that happened to be presented to you first.


> everything should be immediately obvious to everyone

That... is not what I said. At all.


> I would argue that someone should be able to understand what a function does by reading the name regardless of their understanding of the underlying language it is written in.

You're saying that if someone doesn't know Haskell, they should be able to understand what functions in Haskell do by their name. Which disqualifies standard names like >>=, hoist, and more.


I'm saying if you don't have a working understanding of some technology, maybe think twice before negatively opining about it on HN.


nope, you're saying people aren't allowed to even have an opinion unless they pass your own criteria beforehand.

sorry, that isn't how most people operate. It would be a lot more constructive to tell them why their opinion is wrong, instead of telling them they aren't allowed to even have the opinion in the first place.


I never said that. My issue was with shouting uninformed opinions in a thread about a blog post by someone who actually has the experience to have such informed opinions.

Sometimes you gotta stop spouting answers and start asking questions.


> I never said that.

Yes, you did.

> When you don't have the prerequisites, the best thing to do is to be quiet and/or learn rather than pollute the comments with your unfounded value judgements.

reads as 'Shut up, your opinion doesn't matter'.

I'm done here. If you want to be rude and aggressive to people and turn them off to the concept of functional programming entirely, then that's on you, I suppose.


If you don't know Haskell, your opinion of if Haskell code 1) has a good name or 2) has an understandable implementation doesn't matter much to me, _especially_ on a blog post meant for proficient Haskellers such as this one.

In general, people can be better served lurking or asking questions or making qualified statements instead of blasting their opinion like this:

> The function name means nothing, and the implementation is hard to follow.


I don't know why you're getting downvoted (because of the tone?). As someone writing Haskell, I find that name as clear and concise as it should be.


Yes definitely the tone. Sadly I don't know how to be nicer when dealing with someone who seems to be projecting their opinion to an audience when they don't have the prerequisites to have that opinion to begin with. (It happens a lot in Haskell HN threads)


I learned some Haskell, and even wrote a couple of small programs and simple pull requests in it. However, I still wouldn't be able to effectively read and maintain such code at a reasonable pace.

There are many reasons to love Haskell. The static typing, prevention of bugs, the whole ethos of "doing things the right way", the beauty of abstractions. But I don't think that I would be able to be really productive in it in a reasonably sized project. And code written in this blog post illustrates this very well.


You shouldn't get downvotes; that's certainly a very common impression to get after writing small programs.

But the ethos of the language is not "doing things the right way", it's "do it first, make it right while it grows". The language's biggest selling point is refactoring.


Does F# offer this 'type hole' thing?

I admit I still like to let-bind a type instance and then fill in its members with calculated values, database reads, etc. But since I'm nice and C#rrupted I don't do it the idiomatic/right way, whatever that way is: I let-bind a mutable instance, then modify and return it to whoever called the function.


> (\(s', m') -> fmap swap

I understand beauty is in the eye of the beholder, but this code, along with much of the Haskell code I’ve ever read, looks horrendous. Why overuse non-alpha characters? Nested parens take mental time to unwrap, and backslashes have a lot of meaning outside of a language.


I'd much rather have nested parens than "all the parens taken away" and having to remember and then mentally calculate the precedence order for a bunch of obscure operators, which is what I tend to see in a lot of Haskell code. No doubt it becomes natural after persevering for a while....


For newcomers, the problem is exactly that they don't understand the type system. That's where soft documentation like examples would help a lot. I don't see why they would need to be mutually exclusive.


I don't understand a single thing in that code. Should it be so difficult for a person from an OOP background?

Disclaimer: never coded in FP. But it looks hard to understand what this does in the first place.

Anyway, never mind.


It doesn't seem to be meant to be easy for someone who isn't familiar with Haskell syntax, if that's what you mean. It doesn't seem like it would be too bad for someone who knows a dialect of ML, maybe with the addition of a couple of hints. I think, though, that if it were written to be digestible for someone who's only familiar with OOP, it would quickly expand from a blog post into a small book.


It seems that there are prerequisites for learning FP. Damn! :(


Or, as it was put back in the 80s (and probably earlier), "Strong typing is for people with weak memories."

https://multicians.org/thvv/realprogs.html


Yeah, but back then, I didn't think I had a weak memory. The intervening decades proved to me that my memory is in fact weak.


No thanks. I like my domain model flexible and defined at run-time. You'll be better off in the long run.

(Imagine having data for type A(v1) and type A(v2), for example.)

If you do static domain modeling, your world remains static. That's not real.


Going from the signature to the code took me less time than scrolling through this article.

I'm not sure asking the compiler to babysit you every turn of the road is a good idea.


I can't get past the poor variable naming, though.


That's typically the case when dealing with very abstract functions:

    zoop :: (a -> b -> b) -> b -> [a] -> b
    zoop _   b []       = b
    zoop abb b (a : as) = abb a $ zoop abb b as
You can barely be more precise than that, except for the name `zoop`, which is obviously meant to be irrelevant to the point of the article.

You will probably run into this in Haskell more than other languages though, because you can write a lot of general and abstract code in Haskell. In most other languages you cannot abstract over type constructors (higher-kinded types needed), and many still lack even parametric polymorphism.
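
If it helps to anchor it: that definition is just the standard foldr under a different name, which is kind of the point.

    -- `zoop` is foldr in disguise; the signature leaves little else it could be.
    zoop :: (a -> b -> b) -> b -> [a] -> b
    zoop = foldr

    -- e.g. zoop (+) 0 [1,2,3] == 6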


I think having a "complicated" type system like Haskell's is a good thing, because it lets you model data in your program accurately, and more importantly, lets you offload work from your brain to the compiler.

One question: How well do languages like Haskell work for code with side effects, though? Say I want to write a CRUD web API. How easy is it to work with network requests and database operations, for example, in Haskell?


I get what you're saying, but defining a method from only a type signature, and then finding that it can be implemented by only caring about types, isn't much of an accomplishment. Of course you can define it by filling "type holes"—the method isn't real, so there's nothing it can or would do apart from the types matching up.


What are you saying? The function at the top of this blog post does do real things.

This method is real. You can't literally write all your Haskell this way, but I turn my brain off and play type tetris as a nontrivial part of my workflow pretty often. I rarely _only_ do that, but it's nice to not have to exert conscious mental energy and instead use tetris-like muscle/visual memory to make progress towards a goal.


I'm talking about the `jonk :: (a -> b) -> ((a -> Int) -> Int) -> ((b -> Int) -> Int)` extended example that you work through.
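
Sure, the types pin it down almost mechanically; one implementation they admit (not necessarily the article's exact one) is:

    -- Typechecks, derived by chasing the holes: compose, then apply.
    jonk :: (a -> b) -> ((a -> Int) -> Int) -> ((b -> Int) -> Int)
    jonk f k bi = k (bi . f)

But that's my point: the fact that it typechecks tells you very little about whether it's the thing you actually wanted.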

Or take the haskell filter function: `filter :: (a -> Bool) -> [a] -> [a]`. Some googling says that there's no opposite function for filter, but it's present in a lot of languages, so let's pretend there is. The signature would be the same: `filterNot :: (a -> Bool) -> [a] -> [a]`. Just looking at the types, filling type holes, wouldn't distinguish those two functions—the difference isn't in the types, but in their behavior.
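
To make that concrete, here are two definitions (names are mine) that both typecheck against the same signature:

    -- Both fit (a -> Bool) -> [a] -> [a]; only their behavior differs.
    myFilter :: (a -> Bool) -> [a] -> [a]
    myFilter p = foldr (\x acc -> if p x then x : acc else acc) []

    myFilterNot :: (a -> Bool) -> [a] -> [a]
    myFilterNot p = myFilter (not . p)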


That's true (and is mostly due to the concrete nature of Bool), but even so, parametricity does help a lot when implementing that function. `forall a.` cuts down on possible implementations, but you still gotta program sometimes :)

In a more dependently-typed language, we can write something like this

    f1 :: (p :: a -> Bool) -> [a] -> [a `suchThat` p a]
    f2 :: (p :: a -> Bool) -> [a] -> [a `suchThat` not (p a)]
and that would make it so f1 couldn't be filterNot and f2 couldn't be filter.

And we get the added benefit of giving the caller of our function proof that the elements have passed that predicate (maybe they can make use of that downstream).

But even in this case, there's nothing stopping you from just always returning an empty list. Not all theorems come for free!
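
In plain Haskell terms, nothing in the signature rules out the degenerate version (hypothetical name):

    -- Typechecks against the filter signature, yet is useless.
    filterBogus :: (a -> Bool) -> [a] -> [a]
    filterBogus _ _ = []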



