Well, one example I like when people ask me "what's contravariant good for?" is the following intuition. Suppose we're doing stream processing. We might have a datatype 'Source a' which is a source that produces a stream of a's.

'Source' is an example of a Functor. We can use 'fmap :: Functor f => (a -> b) -> f a -> f b' as '(a -> b) -> Source a -> Source b'. So, if we have a function 'a -> b' we can turn a source of a's into a source of b's. Nice.
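
To make that concrete, here's a minimal sketch, assuming one hypothetical representation (a Source is an action that pulls the next element, yielding Nothing once exhausted):

    newtype Source a = Source { pull :: IO (Maybe a) }

    instance Functor Source where
      -- apply f to every element the source will ever produce
      -- (outer fmap is IO's, inner fmap is Maybe's)
      fmap f (Source p) = Source (fmap (fmap f) p)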

In such a library we would obviously also want a 'Sink a', this is a datatype that consumes a stream of a's. For example, it might write these a's to disk. Now, clearly it doesn't make sense for 'Sink' to be a functor. Think about it, if we have a sink that writes a's to disk, how would a function 'a -> b' affect it? Sure, we could turn all a's into b's, but then what? We don't know how to do anything with b's.

However, 'Sink' is a Contravariant (Functor). So, let's have a look at that. 'contramap :: Contravariant f => (a -> b) -> f b -> f a', so '(a -> b) -> Sink b -> Sink a'. If we have a 'Sink' that writes 'b' to disk and a function that turns a's into b's we can obviously construct a 'Sink a' that consumes a stream of a's, converts them to b's and passes them to the original 'Sink'.
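
Again as a sketch, assuming a Sink is just an action that consumes one element at a time:

    import Data.Functor.Contravariant (Contravariant (..))

    newtype Sink a = Sink { push :: a -> IO () }

    instance Contravariant Sink where
      -- convert each incoming a to a b, then feed it to the original sink
      contramap f (Sink k) = Sink (k . f)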

And then there's a third abstraction not mentioned in the original post: the 'Profunctor'. A Profunctor is a type with two arguments that is contravariant in the first and covariant (a regular Functor) in the second. In other words, if we have 'Pipe a b' this type can be made a Profunctor, which comes with 'dimap :: Profunctor p => (a -> b) -> (c -> d) -> p b c -> p a d', which hopefully looks very similar to both Functor and Contravariant. In our stream processing example 'Pipe a b' would correspond to a pipe that consumes a stream of a's and turns it into a stream of b's, which we can use to plumb 'Sink' and 'Source' together. We can both contramap its first type argument to change what we can feed into it, and fmap the second type argument to alter what it produces.
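
A minimal sketch of that instance, using a plain function as the simplest possible Pipe (a real streaming Pipe would carry state and effects, but the instance has the same shape):

    import Data.Profunctor (Profunctor (..))  -- from the profunctors package

    newtype Pipe a b = Pipe { runPipe :: a -> b }

    instance Profunctor Pipe where
      -- pre-process the input with f, post-process the output with g
      dimap f g (Pipe h) = Pipe (g . h . f)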

These are far from the only cases where these classes show up, but I hope they give some relatively easy-to-follow examples of how these classes can capture common scenarios.




Thanks for your reply, I'm going to try to read it and the others carefully.

I wonder whether it's possible to bridge the gap more for people who are used to more conventional programming paradigms. For example,

> 'Source' is an example of a Functor. We can use 'fmap :: Functor f => (a -> b) -> f a -> f b' as '(a -> b) -> Source a -> Source b'. So, if we have a function 'a -> b' we can turn a source of a's into a source of b's. Nice.

In python that's just

  map(function_a_to_b, stream_of_as)  # lazy in Python 3; itertools.imap in Python 2
and many other languages will have a similar construction.

So I immediately hit a block as I'm struggling to understand why I need the notion of a Functor to understand what in the end looks like just lazy mapping of a function over a stream.


>So I immediately hit a block as I'm struggling to understand why I need the notion of a Functor to understand what in the end looks like just lazy mapping of a function over a stream.

What you're calling "lazy mapping a function over a stream" is an example of a functor. To be precise, the functor goes from "type a" to "stream of type a" (note that this isn't quite the same as a function, since it operates on the level of types).

You don't need to understand Functors to understand how mapping over a stream works, however if you understand mapping over a stream then you can understand any other functor as being similar to mapping over a stream. Now functors are somewhat basic so it's hard to come up with a really nontrivial example, but you could for instance consider:

    generate_b = lambda: function_a_to_b(generate_a())
to be pretty much the same thing as mapping over a stream, even though those functions can't be iterated over and could just be generating random data, sampling some time series, etc.

Note that here we're transforming the output, when you start transforming the input you get something contravariant, like when you do something as follows:

    class CoMapped:
        def __init__(self, object_with_b_index, function_a_to_b):
            self.object_with_b_index = object_with_b_index
            self.function_a_to_b = function_a_to_b

        def __getitem__(self, a):
            # transform the *input* key before the wrapped lookup sees it
            return self.object_with_b_index[self.function_a_to_b(a)]
it looks a bit weird to do this in Python though as there's no way to denote types.
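
For comparison, Haskell's Data.Functor.Contravariant ships exactly this shape as Op, a wrapped lookup function that's contravariant in its argument (byB/byA below are made-up names for illustration):

    import Data.Functor.Contravariant (Op (..), contramap)

    byB :: Op String Int                 -- wraps a lookup Int -> String
    byB = Op (\b -> "value at " ++ show b)

    -- contramap pre-composes a conversion, just like __getitem__ above
    byA :: Op String Char                -- index by Char instead, via fromEnum
    byA = contramap fromEnum byB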


> So I immediately hit a block as I'm struggling to understand why I need the notion of a Functor to understand what in the end looks like just lazy mapping of a function over a stream.

You don't, for this one example.

You don't, for any one example.

But if you keep looking, you'll find more and more of these examples of functors embedded in the code you are already writing. You need the notion of functor when you want to abstract over all of these examples, or generalize one of your examples to handle things that it couldn't before.


> So I immediately hit a block as I'm struggling to understand why I need the notion of a Functor to understand what in the end looks like just lazy mapping of a function over a stream.

You don't need to understand Functor to understand "map over stream", no. But it turns out that Functor (and to a slightly lesser extent Contravariant and Profunctor) keeps showing up everywhere. It's not just stream processing; they show up in writing parsers, data structures, DSLs, software transactional memory...

For example, "computation that produces either an error or a result" and "transforming the result IFF the operation was successful" is also captured by Functor. And so is "perform IO and then transform the result of that IO". And "apply this function to all elements in a datastructure".
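
In Haskell terms those three cases are literally the same fmap (function names here are made up for illustration):

    halve :: Either String Int -> Either String Int
    halve = fmap (`div` 2)           -- touches the result only on success

    lineLen :: IO Int
    lineLen = fmap length getLine    -- perform IO, then transform its result

    double :: [Int] -> [Int]
    double = fmap (* 2)              -- apply to every element of a structure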

So, once you 1) realise that all these cases can be generalised to functors and 2) write down what the lawful behaviour of Functor is (i.e. "the spec"), you can suddenly start writing generic code that works for any Functor. So when I make 'Source' a Functor, my users get all the generic code written for any Functor for free.
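
For instance, a couple of illustrative (made-up) functions written once against the Functor interface, which then work on Maybe, Either e, IO, lists, and our Source alike:

    import Data.Functor (void)

    tagAll :: Functor f => String -> f a -> f (String, a)
    tagAll t = fmap (\x -> (t, x))

    ignore :: Functor f => f a -> f ()
    ignore = void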

Similarly Monoid, Applicative, Monad, and all these other abstractions occur all over the place. And you get this positive feedback cycle: we have generic code that works for any type that's an instance, so if we make new types instances of them, we get all this generic code for free. But the more types are instances, the more incentive there is to write even more generic code using these abstractions, which in turn encourages ever more instances, etc.
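
A small illustration of that cycle using traverse from base, which is written once against Applicative yet specialises to many behaviours (parseAll/printAll are made-up names):

    import Text.Read (readMaybe)

    parseAll :: [String] -> Maybe [Int]
    parseAll = traverse readMaybe    -- Maybe: all-or-nothing validation

    printAll :: [String] -> IO [()]
    printAll = traverse putStrLn     -- IO: run the effects in order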

So you never need to "understand" the abstractions for any specific instance/operation. It's just that if you have a lot of use-cases described by the same abstraction, recognising that this abstraction captures all these cases lets you get a lot of code reuse. And not only that, but also a powerful toolkit that you can use for working with basically any other library/code you encounter supporting them.

In Haskell Monoid, Functor, Applicative, Monad, and co provide you a toolbox that you end up able to reuse again and again for tons of different libraries all supporting them. These abstractions didn't gain wide adoption for the sake of "theoretical elegance" (though they are elegant), they gained wide adoption due to the sheer pragmatic benefit they provide when programming.

As an example, Applicative itself was only invented in 2004, I started learning Haskell in 2007 and by that time about 80-90% of libraries in the ecosystem had started using it. You don't get that kinda adoption if you don't have anything to show for it :)


Thank you for making this more tangible.



