
Understanding contravariance - allenleein
https://typeclasses.com/contravariance
======
mjpuser
I found this difficult to follow since it requires you to learn Haskell, but
luckily typeclasses.com is a paid service to learn Haskell.

~~~
YorkshireSeason
If you speak Scala, I recommend that you familiarise yourself with Scala's
implicits. Implicits are a generalisation of type classes: you can implement
the latter using the former, see e.g. [1]. If you speak ML/OCaml/F#, then
Oleg's [2] might also be easily accessible.

[1] B. C. Oliveira, A. Moors, and M. Odersky. Type classes as objects and
implicits.
[https://ropas.snu.ac.kr/~bruno/papers/TypeClasses.pdf](https://ropas.snu.ac.kr/~bruno/papers/TypeClasses.pdf)

[2] O. Kiselyov, Implementing and Understanding Type Classes.
[http://okmij.org/ftp/Computation/typeclass.html](http://okmij.org/ftp/Computation/typeclass.html)

------
ajross
Unpopular opinion: co/contravariant representations represent exactly the
point at which formal typesafety starts hurting instead of helping as an
engineering practice.

In all circumstances where you find the need to worry about this stuff to get
your real-world code to build, your project would have been better served by
an environment that just let you do the trick with runtime validation, or even
an unsafe typecast. The resulting code is cleaner, simpler, easier to maintain
and more straightforward to evolve than the "correct" madness that results
from proper functional analysis.

I'll take my downvotes, thanks.

~~~
munificent
Counter-opinion.

I work on a language, Dart, where all generics are covariant. In Dart 1,
invalid uses of covariance were not checked at runtime and were silently
ignored. When that leads to wrong behavior down the road, it's profoundly
confusing. Not checking this and sacrificing safety also ties your hands when
it comes to compile-time optimization, because you can no longer take any of
your type annotations seriously.

In Dart 2, we've moved to a sound type system, which involves checking these
misuses at runtime. I have personally gone through and fixed hundreds of these
runtime failures. Others at Google have done the same. It's grueling,
difficult work. Figuring out where an object was constructed before it
eventually wound up somewhere in a covariant position is not easy.
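The failure mode can be sketched in Python (names invented for illustration;
Dart's actual semantics differ in detail). Covariance says a list of Cats is
also a list of Animals, so writing a Dog through the Animal-typed view is
unsound, and a sound system has to catch it at runtime:

```python
class Animal: pass
class Cat(Animal): pass
class Dog(Animal): pass

class CheckedList:
    """A list that, like Dart 2, re-checks element types at runtime."""
    def __init__(self, elem_type):
        self.elem_type = elem_type
        self.items = []

    def add(self, item):
        # Dart 2 inserts this kind of check on writes made through a
        # covariantly-typed reference.
        if not isinstance(item, self.elem_type):
            raise TypeError(f"{type(item).__name__} is not a "
                            f"{self.elem_type.__name__}")
        self.items.append(item)

cats = CheckedList(Cat)   # think: List<Cat>
animals = cats            # covariance: also usable as a List<Animal>
animals.add(Cat())        # fine
try:
    animals.add(Dog())    # a Dog *is* an Animal, but this write is unsound
except TypeError as err:
    print("caught:", err)
```

The check fires at the write, far from where the list was constructed, which
is exactly why tracing these failures back is so grueling.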

I would _much_ rather have static control over variance even with the
additional complexity it causes, and I've heard similar requests from many
users.

~~~
jblow
You don’t have a debug allocator that just tells you what file and line made
the allocation? Ouch.

~~~
munificent
We have a lot of different execution environments (compiled to JS with debug
compiler, compiled to JS with optimizing compiler, JIT VM, AoT VM, bytecode
interpreter, Android, iOS, etc.) so that makes it a little more complex.

We do have allocation tracking in some of those, but that doesn't always get
you as much as you'd expect. Discovering that, say, "oh, this list was
deserialized by the JSON API" still means it could have come from dozens of
different regions of the program.

------
cnowacek
Conveniently, this link:
[https://www.pointfree.co/episodes/ep14-contravariance](https://www.pointfree.co/episodes/ep14-contravariance)
was also on the front page of HN. It explores contravariance a bit more and
provides some practical examples in Swift.

------
techno_modus
What does the property 'contravariant' or 'covariant' belong to: function,
functor, class, type class, data type, function or method argument, function
or method return value, generic type/collection or something else?

~~~
hinkley
I’m a little surprised you only got two answers, because I’m pretty sure the
answer depends on how you look at the problem. Mine is this:

A function has variance because the types of its inputs or outputs are in a
hierarchy.

An object type system (or I suppose even a functor system?) with generics
might leverage variance to make sure that the behavior of its own methods is
consistent with the Substitution Principle (LSP).

And many people will tldr this into variance being the tool to give you a type
system that is LSP compatible. But it’s all about the operations, consuming
and emitting types.
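That view can be made concrete in plain Python (hypothetical names, no static
checker involved): a handler for any Animal substitutes safely where a
Dog-handler is expected, because function inputs are contravariant; a producer
declared to return an Animal may return a Dog, because outputs are covariant.

```python
class Animal:
    def name(self):
        return "animal"

class Dog(Animal):
    def name(self):
        return "dog"

# A consumer that wants a handler for Dogs specifically:
def walk_all(dogs, handle_dog):
    return [handle_dog(d) for d in dogs]

# A handler for *any* Animal is a safe substitute (contravariant input):
def feed(animal):
    return "feeding " + animal.name()

print(walk_all([Dog(), Dog()], feed))  # ['feeding dog', 'feeding dog']

# A producer declared to yield Animals may safely yield Dogs
# (covariant output):
def breeder():
    return Dog()

assert isinstance(breeder(), Animal)
```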

~~~
mping
I think like this too. Would only like to add that I believe a type parameter
is *variant in relation to a function/method, not just by itself. That's why
some people say "appears in a covariant position" and so on.

------
Myrmornis
Are there some good examples of problems for which these sorts of programming
tools are particularly helpful, or yield a particularly elegant solution, or
is their role more academic?

~~~
merijnv
Well, one example I like when people ask me "what's contravariant good for?"
is the following intuition. Suppose we're doing stream processing. We might
have a datatype 'Source a' which is a source that produces a stream of a's.

'Source' is an example of a Functor. We can use 'fmap :: Functor f => (a -> b)
-> f a -> f b' as '(a -> b) -> Source a -> Source b'. So, if we have a
function 'a -> b' we can turn a source of a's into a source of b's. Nice.
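As a rough Python sketch of that 'fmap', with a generator standing in for
'Source' (the names here are invented for illustration):

```python
def source_of_ints():
    """Stands in for a 'Source Int': lazily yields values."""
    yield from (1, 2, 3)

def fmap_source(f, source):
    """fmap specialised to Source: (a -> b) -> Source a -> Source b."""
    for a in source:
        yield f(a)

# Turn a source of ints into a source of strings:
print(list(fmap_source(str, source_of_ints())))  # ['1', '2', '3']
```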

In such a library we would obviously also want a 'Sink a', this is a datatype
that consumes a stream of a's. For example, it might write these a's to disk.
Now, clearly it doesn't make sense for 'Sink' to be a functor. Think about it,
if we have a sink that writes a's to disk, how would a function 'a -> b'
affect it? Sure, we could turn all a's into b's, but then what? We don't know
how to do anything with b's.

However, 'Sink' _is_ a Contravariant (Functor). So, let's have a look at that.
'contramap :: Contravariant f => (a -> b) -> f b -> f a', so '(a -> b) -> Sink
b -> Sink a'. If we have a 'Sink' that writes 'b' to disk and a function that
turns a's into b's we can obviously construct a 'Sink a' that consumes a
stream of a's, converts them to b's and passes them to the original 'Sink'.
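A minimal Python sketch of 'contramap' for such a 'Sink' (the 'Sink' class
here is invented for illustration):

```python
class Sink:
    """Stands in for 'Sink a': consumes values via 'consume'."""
    def __init__(self, consume):
        self.consume = consume

def contramap_sink(f, sink):
    """contramap: (a -> b) -> Sink b -> Sink a.
    Convert each incoming a to a b, then hand it to the original sink."""
    return Sink(lambda a: sink.consume(f(a)))

written = []
sink_of_strings = Sink(written.append)               # Sink String
sink_of_ints = contramap_sink(str, sink_of_strings)  # Sink Int
sink_of_ints.consume(42)
print(written)  # ['42']
```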

And then there's a third abstraction not mentioned in the original post: the
'Profunctor'. A Profunctor is a type that has two arguments and is
contravariant in the first one, while the second is a regular Functor. In
other words, if we have 'Pipe a b' this type can be made a Profunctor which
comes with 'dimap :: Profunctor p => (a -> b) -> (c -> d) -> p b c -> p a d',
which hopefully looks very similar to both Functor and Contravariant. In our
stream processing example 'Pipe a b' would correspond with a pipe that
consumes a stream of a's and turns it into a stream of b's, which we can use
to plumb 'Sink' and 'Source' together. We can both contramap its first type
argument to change what we can feed into it, as well as fmap the second type
argument to alter what it produces.
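And a matching Python sketch of 'dimap', again with an invented 'Pipe'
wrapper:

```python
class Pipe:
    """Stands in for 'Pipe a b': a processing step from a to b."""
    def __init__(self, run):
        self.run = run  # run plays the role of the a -> b transformation

def dimap(f, g, pipe):
    """dimap :: (a -> b) -> (c -> d) -> Pipe b c -> Pipe a d.
    f pre-processes the input (the contravariant side),
    g post-processes the output (the covariant side)."""
    return Pipe(lambda a: g(pipe.run(f(a))))

double = Pipe(lambda x: x * 2)   # Pipe Int Int
piped = dimap(int, str, double)  # Pipe String String
print(piped.run("21"))  # prints 42
```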

These are far from the only cases where these classes show up, but I hope they
give some relatively easy-to-follow examples of how these classes can capture
some common scenarios.

~~~
Myrmornis
Thanks for your reply, I'm going to try to read it and the others carefully.

I wonder whether it's possible to bridge the gap more for people who are used
to more conventional programming paradigms. For example,

> 'Source' is an example of a Functor. We can use 'fmap :: Functor f => (a ->
> b) -> f a -> f b' as '(a -> b) -> Source a -> Source b'. So, if we have a
> function 'a -> b' we can turn a source of a's into a source of b's. Nice.

In python that's just

    
    
      map(function_a_to_b, stream_of_as)  # itertools.imap in Python 2
    

and many other languages will have a similar construction.

So I immediately hit a block as I'm struggling to understand why I need the
notion of a Functor to understand what in the end looks like just lazy mapping
of a function over a stream.

~~~
contravariant
>So I immediately hit a block as I'm struggling to understand why I need the
notion of a Functor to understand what in the end looks like just lazy mapping
of a function over a stream.

What you're calling "lazy mapping a function over a stream" is an example of a
functor. To be precise the functor goes from "type a" to "stream of type a"
(note that this isn't quite the same as a function since it operates on the
level of types).

You don't need to understand Functors to understand how mapping over a stream
works, however if you understand mapping over a stream then you can understand
_any other functor_ as being similar to mapping over a stream. Now functors
are somewhat basic so it's hard to come up with a really nontrivial example,
but you could for instance consider:

    
    
        generate_b = lambda: function_a_to_b(generate_a())
    

to be pretty much the same thing as mapping over a stream, even though those
functions can't be iterated over and could just be generating random data, or
sample some time series etc.

Note that here we're transforming the output, when you start transforming the
input you get something contravariant, like when you do something as follows:

    
    
        class CoMapped:
            # ... #
    
            def __getitem__(self, a):
                return self.object_with_b_index[self.function_a_to_b(a)]
    

it looks a bit weird to do this in Python though as there's no way to denote
types.

