
Implement with types, not your brain (2019) - pcr910303
https://reasonablypolymorphic.com/blog/typeholes/index.html
======
dang
[https://news.ycombinator.com/item?id=20278496](https://news.ycombinator.com/item?id=20278496)

------
q3k
This way of writing Haskell, from my experience, results in write-only code.
While it's nice that the compiler helped you figure out which obscure
operators to use for your code to actually compile, it sucks for the next
person trying to find the bug in your code. And just because the types check
out doesn't mean it's free from bugs.

Code is also meant to be read and reasoned about by humans, not just by the
compiler.

~~~
_bxg1
Rich Hickey once said something (with regards to TDD) like, "We have guard
rails on highways, but that doesn't mean we just take our hands off the wheel
because the rails will keep us on the road. Guard rails don't know where we
actually want to go. We drive the car where we're trying to go, and the rails
are only there for when things go wrong."

I think a version of this applies to static types as well. Even under normal
circumstances, I've noticed at times my own thoughts getting lazy when
refactoring statically-typed code. I don't "load" as much of the program into
my brain as I would otherwise; I just change the part I want to change and
then fairly thoughtlessly fix all the type errors. I don't think this is a
good thing. The OP takes this to an absolute extreme.

Static types are there to document and to catch bugs early; they cannot be
used as a complete verification that your code is correct, much less a
_generative_ tool for writing logic where "you're not entirely sure how" it
works.

~~~
agentultra
> Static types are there to document and to catch bugs early; they cannot be
> used as a complete verification that your code is correct, much less a
> generative tool for writing logic where "you're not entirely sure how" it
> works.

You might be surprised to learn that this is not only possible but something
research is definitely working towards.

In a language like Agda, for example, when there is enough type information
available the compiler can fill in the holes for you with the obviously
correct implementation.
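
Haskell's typed holes give a lighter-weight version of the same interaction.
A minimal sketch (mine, not from the article; the holes are deliberate, and
we're asking GHC what fits):

    
    
        swap :: (a, b) -> (b, a)
        swap (x, y) = (_, _)
        
        -- GHC reports something like, for the first hole:
        --   Found hole: _ :: b
        --   Valid hole fits include: y :: b
    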

Program synthesis takes this even further: using type-level specifications,
we can automatically derive programs that meet those specifications. This is
useful because the language of types is much more concise than the language
concise than the language of terms. For sufficiently difficult problems it's
much easier for humans to reason about complexity in higher-level
specifications. Let the computer generate the code!

It's still early on for this kind of technology but projects like Synquid[0]
are making good headway.

Dependent-type theory also forms the basis of interactive theorem proving in
Coq and Lean[1]. We use something like holes as the basis of proofs. Haskell's
typed holes are quite a bit more loose but to me it feels very similar to
working with such a system. I propose to Haskell that there exists an
expression satisfying a particular type, and then I use the typed holes to
fill in my obligations to provide a proof. The hole is my goal and the
available objects in scope are my terms. It is a very effective tool for
solving hard problems.
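
For a concrete taste, here is the article's `jonk` signature discharged hole
by hole (my own derivation, so treat it as a sketch):

    
    
        jonk :: (a -> b) -> ((a -> Int) -> Int) -> ((b -> Int) -> Int)
        jonk ab aii = \bi -> aii (\a -> bi (ab a))
        
        -- each step discharges one goal: the result needs a
        -- bi :: b -> Int, aii needs an (a -> Int), and
        -- \a -> bi (ab a) is essentially the only way to build
        -- one from the terms in scope
    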

[0] [https://www.csail.mit.edu/research/synquid-synthesis-liquid-...](https://www.csail.mit.edu/research/synquid-synthesis-liquid-types)

[1] [https://leanprover.github.io/](https://leanprover.github.io/)

 _update_ : Links

~~~
simiones
I do wonder whether there is any reason to believe that generating code from
types would ever be easier than writing the code directly. More specifically,
I wonder if this:

> This is useful because the language of types is much more concise than the
> language of terms.

actually holds with sufficiently complex types. For example, would the fully-
specified type for quicksort actually be shorter than quicksort?

I don't know that there is any reason to believe so. The best we may be able
to hope for is that we'll have less work to do for formal verification, since
instead of writing both the proofs and the code, we may be able to only write
the proofs and have the code for free. However, given that writing the proofs
is, at least currently, much, much harder than writing just the code, I'm not
sure this type of development would make more than a dent in the software
engineering world.

~~~
agentultra
Let's consider an algorithm that shares state among multiple concurrent
threads of execution without coordinating work through a lock.

What is the probability that an expert software engineer could write a correct
implementation of this algorithm, I wonder? As their colleague will we be able
to review that code and notice if it contains an error? What are the
sufficient parameters for an acceptable solution: _should it run with 8
threads and 2 state variables?_

I don't think we'll see these tools and practices become widespread, but I do
see the cost of using them coming down. I also see a growing number of cases
where systems that aren't safety-critical are nevertheless causing harm to
property and people. It's possible that at this intersection we'll see
adoption grow: bringing down the cost of writing software that carries higher
degrees of liability would be a useful tool to have.

~~~
simiones
I'm not sure what you are referring to as "these techniques" - are you
referring to full formal specification with dependent types?

If so, then I think the answer so far is: even though few senior SEs could
write correct implementations of that algorithm, there are definitely many
more of them than there are people who can write non-trivial software with
complex dependent types in a decent amount of time.

Of course, you can write code in Idris or Agda and be productive, but not if
you want to do things like actually prove that your sort function produces a
sorted permutation of the original list, and other similarly rich, tight
bounds.

The more you want to express in your types, the more complex your program
becomes. A very nice example is how you would implement matrix multiplication
in the regular way (with matrices of plain numbers) versus how much more work
you need if you want to track physical quantities (with proper support for a
different physical quantity for each entry in the matrix). It's easy to say
{{1, 2}, {3, 4}} can be multiplied by {{5, 6}, {7, 8}}, but much harder to say
whether {{1m, 2kg}, {3N, 4Pa}} can be multiplied by {{5m, 6N}, {7Pa, 4kg}}.
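
To make that concrete, here is a toy sketch (my own, and covering only the
scalar half of the problem) of tracking units at the type level:

    
    
        {-# LANGUAGE DataKinds, KindSignatures #-}
        
        import GHC.TypeLits (Symbol)
        
        -- a number tagged with its physical unit at the type level
        newtype Q (unit :: Symbol) = Q Double
        
        -- adding like quantities is easy to express ...
        addQ :: Q u -> Q u -> Q u
        addQ (Q x) (Q y) = Q (x + y)
        
        -- ... but multiplication needs type-level unit arithmetic
        -- (what is Q "m" -> Q "kg" -> Q ???), and a matrix mixing
        -- units per entry needs all of that indexed at every position
    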

~~~
agentultra
I think it's harder to find an error in a program than in a proof. At least
for proofs that can be verified automatically by a computer. And that scales
with the difficulty of the problem. A proof that a program copies files from
one place to another is not interesting. However, a proof that your algorithm
can safely share state without a lock is.

My point here is _not_ that dependent type theory is going to take over the
industry and we should all learn to write proofs!

It's more that we _can_ if we want to. Writing a proof with tactics in Lean
feels a lot like programming with typed holes in Haskell (it's harder but the
experience is similar). With training a motivated programmer can write proofs
that verify their designs which provide a strong guarantee of correctness.

Whether types are more expressive than programs... I think they are? That
seems to be the whole point of Synquid: you can express complex constraints
and proofs in your types that enable it to derive the program for you. Is that
_easier_ than simply writing the program?

I think one should consider that writing the _correct_ program is harder. And
where getting it right matters a lot I hope that synthesis will one day allow
us to derive those programs from their specifications (or at least the parts
that matter the most). It would save a lot of the burden of proving that the
program we wrote implements the specifications.

------
gambler
This article is a great example of how FP is currently being ruined in the
same way OOP was ruined in late 80s. The paradigm had a lot of great ideas,
but it became so popular so quickly that people began to fetishize its
terminology and tools, while completely forgetting what those tools were meant
to accomplish.

~~~
whateveracct
What are the tools "meant to accomplish" exactly?

I've deployed production code built with exactly the techniques in the
article. It's doable and, in my experience, easy (modulo the large learning
curve - pay it once, pay it forever).

The (very simple, all things considered!) top-level type signature of `Sem
(State s ': r) a -> S.StateT s (Sem r) a` is also something I've used in
production (with a different library of the same sort).

~~~
atq2119
I wonder what fraction of the code actually benefits from these techniques,
though. I certainly grow suspicious given that examples like the ones in this
article always tend to be of a very narrow, specific kind.

In my experience, every worthwhile piece of code outside of certain libraries
has enough of what you might call "business logic" that these theoretically
clean approaches no longer apply. This has been true across lots of domains,
from games, to CRUD, to compilers, to combinatorial optimization algorithms.

I'd go on to argue that the interesting core of all those problem domains was
precisely in the areas that _couldn't_ be solved using cute techniques like
OP's.

------
ArchReaper
As a non-Haskeller, this article is impossible to follow. I have tried to
learn Haskell a few times, always unsuccessfully. Haskell code is filled with
so many Haskell-specific things. I realize the goal isn't to be an 'easy'
language, but jeez.

Aside from the horrendous variable naming, I'm having trouble understanding
what the author actually accomplished, and why it's necessary for a tool like
this to exist in the first place.

Can anyone translate the problem into a more simplified, non-Haskell-specific
version?

> jonk :: (a -> b) -> ((a -> Int) -> Int) -> ((b -> Int) -> Int)

I have no idea what this line means.

To me, it looks like he wants to write a transformer that matches a specific
signature, and he uses a tool that tells him which functions can fit the
'hole' he has in the code, based on types. If I'm correct (I'm assuming I'm
not), how is this different from IntelliSense in an IDE that shows you a list
of type-valid functions/variables? Why is this a necessary part of the Haskell
coding process? My first instinct is that this demonstrates a problem with the
language, if something like this is necessary.

Appreciate anyone that tries to explain. I know most of my assumptions are
probably wrong.

~~~
eterps
> I have tried to learn Haskell a few times, always unsuccessfully

I suspect that people wanting to learn Haskell actually want to learn a
statically typed functional programming language. And if you also want that
language to be suitable for everyday problems, the kind of things you deal
with at work, then Haskell is probably not the best one to start with.

I love how Scott Wlaschin presents statically typed functional programming
languages as straightforward and pragmatic solutions to everyday problems:

[https://www.youtube.com/watch?v=PLFl95c-IiU](https://www.youtube.com/watch?v=PLFl95c-IiU)

[https://pragprog.com/book/swdddf/domain-modeling-made-functional](https://pragprog.com/book/swdddf/domain-modeling-made-functional)

Once you're comfortable with that (and the way it is presented here really
isn't very complex) then Haskell becomes a lot easier to understand.

But Haskell is very distracting because it is much more expressive and complex
than ML dialects, for example, so there will be a big learning curve before
someone is comfortable reading someone else's Haskell code. For other
statically typed functional programming languages (e.g. OCaml/ReasonML, F#,
Elm) that is not the case.

------
Tehnix
I think a lot of people are missing the point of the article. People are
complaining about the complexity of the function and the like, which basically
brushes off the fact that some problem domains simply are complex.

Putting this into context, the author is demonstrating a function from his
effect system library, Polysemy[0], which is both a complex concept and a
complex problem to solve.

To achieve this with his library, some complicated plumbing is necessary, and
one of those functions happens to be `hoistStateIntoStateT`. The author's
point is then summarised perfectly by:

> Gee, that’s complicated! I must be really smart to have written such a
> function, right?

> Wrong! I just have a trick!

The author shows how you can, when posed with just knowing the type you want
to achieve, more or less generate/infer the implementation from that.

This is a common situation when stitching functions or parts of your program
together, where you have an `A` and want to transform it into a `B`, given the
functions you have available.
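
In miniature (a toy example, not from the library):

    
    
        data A = A
        data C = C
        data B = B
        
        g :: A -> C
        g A = C
        
        h :: C -> B
        h C = B
        
        -- with only g and h in scope, the types leave essentially one
        -- way to build an A -> B
        f :: A -> B
        f = h . g
    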

[0] [https://github.com/polysemy-research/polysemy](https://github.com/polysemy-research/polysemy)

~~~
dragonwriter
> The author shows how you can, when posed with just knowing the type you want
> to achieve, more or less generate/infer the implementation from that.

All this means is that, with a robust enough type system, determining the
correct type is exactly as complex as determining the correct implementation.

In some cases, in some languages, expressing the solution in the language of
types may be more intuitive than, and help guide, the implementation. But
because the two are essentially encodings of each other, that's not only as
subjective as any question of intuitiveness must be, but also an artifact of
the particular type language and implementation language. In a different
implementation language, the problem may be as intuitive as (or more intuitive
than!) it is in the type language in use.

------
riazrizvi
This is one of those journeyman techniques that takes a lot of trial and error
before you realize how helpful it is. Thanks so much for sharing your
hard-earned insight. I do get that people are complaining because the example
is artificial and leads to a lot of complexity, but I get why you chose an
artificial example: to show how universal and powerful the technique is. I
think complaints in the comment section are inevitable.

------
pansa2
This is an extreme example of something I’ve found to be quite common:
programming “types first”. For another example, I recently read a tutorial on
parsing that started by defining `abstract class AstNode`, followed by
multiple concrete derived classes. Only after ~100 lines was there any actual
parsing logic.

My approach would be the opposite - start with the logic, using built-in
types, and _maybe_ refactor to use custom classes later. Perhaps there are
downsides to this approach, but it’s how I think about solving the problem.
I’ve found this style of programming is easiest in languages that have
powerful, flexible built-in types, which seem to be best provided by
dynamically-typed languages.

~~~
marcosdumay
> My approach would be the opposite - start with the logic, using built-in
> types, and maybe refactor to use custom classes later.

That's how you discover what abstract logic you need. And then you implement
that logic types-first, because it's abstract, so the types are both enough to
describe the functionality and much easier to reason about. Then you go and
refactor your old code by following the types (or the type errors, if you are
more in a let-the-computer-do-the-work mood).

But I have no idea how to do that in dynamically typed languages. AFAIK, they
hinder refactoring as much as they can.

------
tsimionescu
The article does a very good job of illustrating why you should never program
this way, if you read between the lines. More precisely, here is the damning
piece of evidence:

> Finally we’re finished! A little experimentation will convince you that this
> zoop thing we just wrote is in fact just foldr! Pretty impressive for just
> blindly filling in holes, no?

Yes, that's pretty impressive. Instead of just using foldr, you've
reimplemented it, and don't even know it! We also have no idea whether the
code we've stumbled into is equivalent to foldr in terms of performance. If I
knew you wrote code like this, I would feel compelled during code review to
rack my brain over whether there is some library function that does what you
just did.

Also, it's absurd to claim that these implementations are the only possible
ones and that they are correct, when all you have is the pretty weak type
system of base Haskell (even though it is probably the strongest type system
in regular use!).

Sure, for a -> a there is a single possible implementation, but as soon as you
have, for example, a list among your parameters, there are arbitrarily many
possible implementations, which differ either in meaning or in performance
characteristics.
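
For instance, all of these inhabit the same type (a quick sketch):

    
    
        r1, r2, r3, r4 :: [a] -> [a]
        r1 = id
        r2 = reverse
        r3 = take 1
        r4 xs = xs ++ xs
        -- same type; very different meaning and performance
    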

~~~
christophilus
> a->a

Even there, it's not true, right? It's been a while since I learned / used
Haskell, but I think something that transforms a { name: 'bob' } to a { name:
'sally' } would fit that bill, right? So, there are many implementations which
meet the type signature, but you'd be pretty surprised if your `identity`
function did this.

~~~
Jtsummers

      f :: a -> a
    

can only be satisfied by the function _identity_, because there are no
constraints on the type (_a_). _identity_ is the lowest-common-denominator
function that can transform a value of any type into a value of the same
type.

If you added a constraint on _a_ , for instance:

    
    
      f :: Num a => a -> a
    

Now _f_ can be almost any function which transforms a numeric type back to
the same numeric type. [0] describes _class Num_. So _f_, with this
constraint, can now be _negate_, _abs_, _signum_, or something which combines
the first parameter with itself using one of the operators, and so
on.
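
Concretely, all of these type-check (a small sketch):

    
    
        f1, f2, f3, f4 :: Num a => a -> a
        f1 = negate
        f2 = abs
        f3 = signum
        f4 x = x * x + 1
    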

By removing constraints on the type itself, Haskell necessarily restricts the
set of operations that can be performed.

[0]
[https://hackage.haskell.org/package/base-4.12.0.0/docs/Prelu...](https://hackage.haskell.org/package/base-4.12.0.0/docs/Prelude.html#t:Num)

------
AnimalMuppet
> I’m not going to say that blindly filling in type holes always works, but
> I’d say maybe 95% of the time?

I must do a very different type of programming. In my world, I'd say this
might work maybe 20% of the time, max.

[Edit: Others on this thread (lkitching and weevie, at least) say that the
difference is that the types must be "sufficiently polymorphic". My types
almost never are. I suspect that the difference is, the more polymorphic the
types, the fewer details of what can be done with each type, and therefore the
fewer the options on the ways they can be combined. For example, if my
function takes two ints, I can add, subtract, multiply, divide, take the
remainder, etc. I can't use this technique to guide that. But if I'm passed in
two "a"s, I probably need to be passed in the function to use to combine them
as well (they may not _have_ a "+"). Then it's pretty clear - I'm passed in
two "a"s, and a function that takes two "a"s and returns an a, and there's
really only one way to combine those parts.]
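
A small sketch of that contrast (my own illustration):

    
    
        -- with concrete Ints, the types give no guidance:
        combineInts :: Int -> Int -> Int
        combineInts x y = x + y   -- (-), (*), div, mod ... all fit
        
        -- fully polymorphic, the combining function must be passed in,
        -- and there is essentially one way to use it:
        combine :: (a -> a -> a) -> a -> a -> a
        combine f x y = f x y
    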

------
6gvONxR4sf7o
> One question you might have is “what the heck does it mean for a type to be
> correct?” Good question! It means your type should be as polymorphic as
> possible.

There's a mathematical sense in which this is correct, but I disagree for
practical software. Yes, there's only one function of type (forall a. a -> a),
so you can't screw it up if you have a type checker, while you can easily
screw up a function of type (BizLogicType -> BizLogicType) since there are
going to be many of those. But when reading code, the most polymorphic version
of a function suffers from the same problem as using "x" as a variable name.
It's the type-level version of the difference between ( x = f(g) ) and (
numBooks = countBooks(library) ).
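
In Haskell terms (a toy illustration; `Library` and `countBooks` are
invented):

    
    
        -- "x = f(g)": maximally polymorphic, precise but anonymous
        f :: [a] -> Int
        f = length
        
        -- "numBooks = countBooks(library)": same code, telling names
        type Library = [String]
        
        countBooks :: Library -> Int
        countBooks = length
    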

------
mirekrusin
Takeaway - haskell programmers will be replaced by machines first, js devs
probably last. /s

------
z3t4
If anyone can tell me what these functions do, I'll rewrite them in
JavaScript.

~~~
rovolo
> Finally we’re finished! A little experimentation will convince you that this
> zoop thing we just wrote is in fact just foldr
    
    
        zoop _   b []       = b
        zoop abb b (a : as) = abb a $ zoop abb b as
    

It's difficult to understand the function because the author deliberately
named everything based only on its type, to show off this method of
autocompletion. This is what the function would look like with the same names
as Ramda's reduceRight:

    
    
        reduceRight _  acc [] = acc
        reduceRight fn acc (head : tail) =
            fn head $ reduceRight fn acc tail
    

[https://ramdajs.com/docs/#reduceRight](https://ramdajs.com/docs/#reduceRight)
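
A quick GHCi check that they agree with foldr (using the definitions above):

    
    
        ghci> zoop (+) 0 [1,2,3] == foldr (+) 0 [1,2,3]
        True
        ghci> zoop (:) [] "abc"
        "abc"
    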

~~~
z3t4
JavaScript already has a reduceRight method on Array.prototype. But here it
is anyway, as promised:

    
    
        // assuming a simple assert helper for the examples below:
        function assert(actual, expected) {
            console.assert(actual === expected, actual, "!=", expected);
        }
        
        function reduceRight(arr, value, reduceFunc) {
            // allow the initial value to be omitted; default it to 0
            if (typeof value == "function" && reduceFunc == undefined) {
                reduceFunc = value;
                value = 0;
            }
            
            // walk the array right to left
            for (var i = arr.length - 1; i >= 0; i--) {
                value = reduceFunc(value, arr[i]);
            }
            
            return value;
        }
        
        assert( reduceRight([1,2,3], (a,b)=>a+b) , 6);
        assert( reduceRight([1,2,3], 1, (a,b)=>a+b) , 7);

~~~
z3t4
Next iteration looks like this. Note that I added another test when I made the
update to the function.

    
    
        function reduceRight(arr, value, reduceFunc) {
            // if the initial value is omitted, seed with the last element
            if (typeof value == "function" && reduceFunc == undefined) {
                reduceFunc = value;
                value = arr.pop(); // note: mutates the input array
            }
            
            // walk the (remaining) array right to left
            for (var i = arr.length - 1; i >= 0; i--) {
                value = reduceFunc(value, arr[i]);
            }
            
            return value;
        }
        
        assert( reduceRight([1,2,3], (a,b)=>a+b) , 6);
        assert( reduceRight([1,2,3], 1, (a,b)=>a+b) , 7);
        assert( reduceRight([1,2,3], (a,b)=>a-b) , 0);

------
hcarvalhoalves
hoistStateIntoStateT

The function name means nothing, and the implementation is hard to follow.

~~~
whateveracct
Are you familiar with Haskell? Or just a programmer in general?

The name doesn't mean nothing. Either you're a Haskeller who's not familiar
with monad transformer idioms, or you're not a Haskeller and are just shouting
an uninformed opinion.

Hoist means something very specific in the monad transformer world, and it
really helps explain what this does. And the "StateIntoStateT" part is pretty
literal (and in the types).

I'd actually argue this name is boring & completely derived from the types.

~~~
q3k
I agree with you here that this does make sense from the point of view of even
an intermediate Haskeller.

However, I also do think that the jargon in Haskell is extremely daunting. I
think it's the only language I've used where easily most of the function names
you see relate not to your business logic but to abstract types. There are
some obvious things like 'map' and 'fold', but the deeper you get into complex
types, the more it looks like total gibberish (and you end up with lift, bind,
hoist, not to mention obscure operators if you're in that sort of codebase).
And while most other languages out there are at least somewhat approachable by
experienced engineers new to that particular language, Haskell code will
basically be ungrokkable black magic until you spend a few months full time
learning it.

And the worst part is that there are so many styles of Haskell - even if
you're experienced, sometimes when jumping into a foreign codebase you'll
suddenly be struck by comonads or some other PL-theory-heavy library you never
used before. And that just sucks when you're trying to debug a production bug.
To me it basically feels like you're never 'done' learning Haskell.

~~~
whateveracct
> And the worst part is that there are so many styles of Haskell - even if
> you're experienced, sometimes when jumping into a foreign codebase you'll
> suddenly be struck by comonads or some other PL-theory-heavy library you
> never used before. And that just sucks when you're trying to debug a
> production bug. To me it basically feels like you're never 'done' learning
> Haskell.

It is true that you have to constantly be learning new idioms, (e)DSLs,
libraries, types, etc in Haskell. But in my experience, it isn't so bad.

I was at a job with a production Haskell codebase that many senior engineers
deemed unsalvageable and were trying to replace in parallel (I don't know if
they ever did tbh.) We had production bugs and I had to debug them (I was on
the team supporting the production service.) Instead of wincing at how "ugly"
and awkward the code was, I just used techniques like the ones in this blog
post. I was immediately able to make an impact on the correctness,
availability, and performance of the production service, along with the
extensibility and ergonomics of the codebase. All because I got my hands dirty
instead of wishing things were perfect. It really wasn't hard at all.

~~~
p1esk
Hey - I thought Haskell forces you to write code without bugs! :)

~~~
whateveracct
Didn't we all at some point :)

------
golergka
I learned some Haskell, and even wrote a couple of small programs and simple
pull requests in it. However, I still wouldn't be able to effectively read and
maintain such code at a reasonable pace.

There are many reasons to love Haskell: the static typing, the prevention of
bugs, the whole ethos of "doing things the right way", the beauty of its
abstractions. But I don't think I would be able to be really productive in it
on a reasonably sized project. And the code in this blog post illustrates this
very well.

~~~
marcosdumay
You shouldn't be getting downvotes; that's certainly a very common impression
to come away with after writing small programs.

But the ethos of the language is not "doing things the right way", it's "do it
first, make it right as it grows". The language's biggest strength is in
refactoring.

------
myu701
Does F# offer this 'type hole' thing?

I admit I still like to let-bind a type instance and then fill in its members
with calculated values, database reads, etc., but since I'm nice and
C#rrupted I don't do it the idiomatic/right way, whatever that way is: I
let-bind a mutable type instance, then modify and return it to whoever called
the function.

------
3fe9a03ccd14ca5
> `\(s', m') -> fmap swap`

I understand beauty is in the eye of the beholder, but this code, along with
much of the Haskell code I’ve ever read, looks horrendous. Why overuse non-
alpha characters? Nested parens take mental time to unwrap, and backslashes
have a lot of meaning outside of a language.

~~~
anentropic
I'd much rather have nested parens than "all the parens taken away" and having
to remember and then mentally calculate the precedence order of a bunch of
obscure operators, which is what I tend to see in a lot of Haskell code. No
doubt it becomes natural after persevering for a while...

------
hopia
For newcomers, the problem is exactly that they don't understand the type
system. That's where soft documentation like examples would help a lot. I
don't see why they would need to be mutually exclusive.

------
ameyv
I don't understand a single thing in that code. Should it be so difficult for
a person from an OOP background?

Disclaimer: I've never coded in FP. But it looks hard to understand what this
code does in the first place.

Anyway, never mind.

~~~
mumblemumble
It doesn't seem to be meant to be easy for someone who isn't familiar with
Haskell syntax, if that's what you mean. It doesn't seem like it would be
_too_ bad for someone who knows a dialect of ML, maybe with a couple of hints
added. I think, though, that if it were written to be digestible for someone
who's only familiar with OOP, it would quickly expand from a blog post into a
small book.

~~~
ameyv
It seems that there are prerequisites for learning FP. Damn! :(

------
jfengel
Or, as it was put back in the 80s (and probably earlier), "Strong typing is
for people with weak memories."

[https://multicians.org/thvv/realprogs.html](https://multicians.org/thvv/realprogs.html)

~~~
AnimalMuppet
Yeah, but back then, I didn't think I had a weak memory. The intervening
decades proved to me that my memory is in fact weak.

------
beders
No thanks. I like my domain model flexible and at run-time. You'll be better
off in the long run.

(Imagine having data for type A(v1) and type A(v2), for example.)

If you do static domain modeling, your world remains static. That's not real.

------
gwenzek
Going from the signature to the code took me less time than scrolling through
this article.

I'm not sure asking the compiler to babysit you at every turn of the road is a
good idea.

------
seek3r00
I can’t get past the poor variable naming, though.

~~~
Tehnix
That's typically the case when dealing with very abstract functions:

    
    
        zoop :: (a -> b -> b) -> b -> [a] -> b
        zoop _   b []       = b
        zoop abb b (a : as) = abb a $ zoop abb b as
    

It can hardly be more precise than that, except for `zoop`, whose name is
obviously meant to be irrelevant to the point of the article.

You will probably run into this in Haskell more than in other languages,
though, because you can write a lot of general and abstract code in Haskell.
Most other languages cannot abstract over type constructors (higher-kinded
types are needed), and some still lack parametric polymorphism altogether.
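
For instance, abstracting over the container itself is routine in Haskell but
needs higher-kinded types (a toy sketch):

    
    
        -- f here is a type constructor (Maybe, [], IO, ...), not a type
        mapTwice :: Functor f => (a -> a) -> f a -> f a
        mapTwice g = fmap (g . g)
        
        -- one definition covers every Functor:
        --   mapTwice (+1) [1,2,3]  == [3,4,5]
        --   mapTwice (+1) (Just 1) == Just 3
    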

------
veeralpatel979
I think having a "complicated" type system like Haskell's is a good thing,
because it lets you model data in your program accurately, and more
importantly, lets you offload work from your brain to the compiler.

One question: How well do languages like Haskell work for code with side
effects, though? Say I want to write a CRUD web API. How easy is it to work
with network requests and database operations, for example, in Haskell?

------
mcphage
I get what you're saying, but defining a method from only a type signature,
and then finding that it can be implemented by only caring about types, isn't
much of an accomplishment. Of course you can define it by filling "type
holes"—the method isn't real, so there's nothing it can or would do apart from
the types matching up.

~~~
whateveracct
What are you saying? The function at the top of this blog post does do real
things.

This method is real. You can't literally write all your Haskell this way, but
I turn my brain off and play type tetris as a nontrivial part of my workflow
pretty often. I rarely _only_ do that, but it's nice to not have to exert
conscious mental energy and instead use tetris-like muscle/visual memory to
make progress towards a goal.

~~~
mcphage
I'm talking about the `jonk :: (a -> b) -> ((a -> Int) -> Int) -> ((b -> Int)
-> Int)` extended example that you work through.

Or take the haskell filter function: `filter :: (a -> Bool) -> [a] -> [a]`.
Some googling says that there's no opposite function for filter, but it's
present in a lot of languages, so let's pretend there is. The signature would
be the same: `filterNot :: (a -> Bool) -> [a] -> [a]`. Just looking at the
types, filling type holes, wouldn't distinguish those two functions—the
difference isn't in the types, but in their behavior.
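
Indeed; a quick illustration (using the standard Prelude filter):

    
    
        -- identical types, opposite behavior
        filterNot :: (a -> Bool) -> [a] -> [a]
        filterNot p = filter (not . p)
    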

~~~
whateveracct
That's true (and is mostly due to the concrete nature of Bool), but even so,
parametricity does help a lot in implementing that function. `forall a.` cuts
down on the possible implementations, but you still gotta program sometimes :)

In a more dependently-typed language, we can write something like this

    
    
        f1 :: (p :: a -> Bool) -> [a] -> [a `suchThat` p a]
        f2 :: (p :: a -> Bool) -> [a] -> [a `suchThat` not (p a)]
    

and that would make it so f1 couldn't be filterNot and f2 couldn't be filter.

And we get the added benefit of giving the caller of our function the proof
that the elements have passed that predicate (maybe they can make use of that
downstream)

But even in this case, there's nothing stopping you from just always returning
an empty list. Not all theorems come for free!

