

Monadic lifting in C++ - adamnemecek
http://bannalia.blogspot.com/2014/03/monadic-lifting-in-c.html

======
chas
Lifting like this is one of the most useful ideas I have taken from Haskell,
but I think it's hard to see its utility through the C++. At its core,
lifting allows functions that don't require a computational context to be
composed with values and functions that have a computational context. For
instance, `unwords . reverse . words` is a function from Strings to Strings
and is used like

    unwords . reverse . words $ "The quick brown fox" => "fox brown quick The"

or

    (unwords . reverse . words) "The quick brown fox" => "fox brown quick The"

This is just a pure, deterministic function mapping values to values; it has
no special context. To compose it with a value that has a context, such as a
string that may not be present (Maybe String), it can be lifted with fmap
(which arguably would be more informatively named liftF, to match the naming
scheme of Haskell's other lift functions).

    fmap (unwords . reverse . words) (Just "The quick brown fox") => Just "fox brown quick The"
    fmap (unwords . reverse . words) Nothing => Nothing

It now has failure awareness for free, with no effort by the original function
author. Haskell has an infix synonym for fmap, <$>, that makes this even more
natural:

    unwords . reverse . words <$> Just "The quick brown fox" => Just "fox brown quick The"

This looks very much like the first example:

    unwords . reverse . words $ "The quick brown fox" => "fox brown quick The"

This is really powerful because it means any function with no special context
can now be used with any context that implements Functor (or Applicative for
multi-argument functions). The view that fmap executes functions with no
context in a special context also explains why fmap for lists is map. A
function returning a list can be looked at as a function non-deterministically
returning multiple values. This can be seen when using monadic bind
(flatMap/concatMap) with functions returning lists:

    [1..5] >>= (\x -> [x-1, x, x+1]) => [0,1,2,1,2,3,2,3,4,3,4,5,4,5,6]

Looking at lists like this, a function with only one return value is
deterministic, so if it is given a non-deterministic value, the result should
be the result of applying the deterministic function to each possible value,
leaving the number of possibilities unchanged; that is to say, it should be
mapped over the list!

    show  $   1      =>  "1"
    show <$> [1..5]  => ["1","2","3","4","5"]
    fmap show [1..5] => ["1","2","3","4","5"]

The key idea here is that lifting like this provides a way to easily and
extensibly compose functions with different expectations about the context in
which they will be executing. This idea could be fruitfully applied in many
languages outside of Haskell and I'm excited to see how it interacts with
other parts of C++.

~~~
vertex-four
This explanation here is an example of the reason that many programmers refuse
to touch Haskell, even those that are interested in other functional languages
for practical reasons.

> At its core, lifting allows functions that don't require a computational
> context to be composed with values and functions that have a computational
> context.

This means _absolutely nothing_ to me. What's a computational context? Why's
it relevant? What does it let me do? What's the equivalent in, say, Python? Is
it a necessary or useful concept in languages that aren't purely functional?

The rest of your post follows along the exact same lines. There's no actual
explanation, just lines of Haskell code and no reasoning for why this is
actually useful for programming in general, rather than simply Haskell
elitism.

When have you applied this abstraction in real-world business code, and how
has it simplified the code? How have you made sure it's readable to a
programmer who may not be as skilled as you?

~~~
chas
I'm sorry that my explanation was so alienating to you. It was intended to be
practical, but clearly I missed that mark for people without a certain amount
of Haskell experience. Is there anything particularly confusing there that you
would like explained?

The reason that I used the phrase "computational context" is that functors
are very general. A Functor can be thought of as something that lets you
translate functions with nothing special about them, just a function from a
value to a value, into functions that operate correctly on non-vanilla values.
Some examples of non-vanilla values (values with a computational context) are:
any value in a data structure (this is regular mapping over lists of values
and trees of values); values that depend on external configuration or store
internal state (more useful for pure FP); streaming data, processed with
functions that don't know how you are streaming it; data that might be
missing, handled without a null check; data that might be actual data or an
error code; and the results of a parser.

To give a python example, the lift function would be a decorator that made the
function it was applied to correctly operate on almost any input value you
gave it. If you give it a data structure, it is applied to all values in the
data structure. If you give it a generator, it creates a new generator that
runs the function over each element as it is generated, something like:

    return (f(x) for x in argument_generator)

where f is the function that the decorator was applied to. This isn't as
popular in Python, but .then() in promises/futures is a lifting function that
lifts functions operating on data to functions operating on data that may
asynchronously arrive in the future or fail to arrive at all.
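A rough sketch of that decorator idea (the name `lifted` and the dispatch
rules here are my own illustration, not a standard library feature):

```python
import functools
import types

def lifted(f):
    # Hypothetical decorator: lifts a plain one-argument function so it
    # also works on lists, generators, and possibly-missing (None) values.
    @functools.wraps(f)
    def wrapper(x):
        if x is None:                           # missing value: propagate the failure
            return None
        if isinstance(x, list):                 # data structure: apply to every element
            return [f(v) for v in x]
        if isinstance(x, types.GeneratorType):  # stream: apply lazily as items arrive
            return (f(v) for v in x)
        return f(x)                             # plain value: just apply
    return wrapper

@lifted
def double(n):
    return n * 2

double(3)          # 6
double([1, 2, 3])  # [2, 4, 6]
double(None)       # None
```

The original function body knows nothing about lists, generators, or failure;
all of that awareness lives in the decorator.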

This is very practical because it allows you to write code with very little
repetition or boilerplate. When someone makes a data structure or processing
technique (streaming, parsing, non-determinism), they write the code to
operate on values inside of the data structure or process. This code is now
only located in the definition of lift/map instead of being spread between
every function that wants to work with that data structure, but not take
advantage of anything special about it. This removes boilerplate from
functions and makes them easier to read and understand. In
addition, this makes functions more extensible with less work. Anyone who
comes along with a new data structure or processing technique can use all of
the existing functions they have without having to write a new compatibility
layer between the new data structure or processing technique and their old
functions. Writing lifting functions separates particular data structure or
processing techniques from general functions. A function can be lifted into
many different contexts without any changes. This decouples specific
implementations from general functions.
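As a sketch of that decoupling, here is a hypothetical new data structure
whose author writes the lifting code exactly once, after which every existing
plain function works on it unchanged:

```python
class Leaf:
    def __init__(self, value):
        self.value = value

class Node:
    def __init__(self, left, right):
        self.left, self.right = left, right

def fmap_tree(f, t):
    # The lifting code lives in one place, next to the data structure;
    # plain functions need no tree-specific compatibility layer.
    if isinstance(t, Leaf):
        return Leaf(f(t.value))
    return Node(fmap_tree(f, t.left), fmap_tree(f, t.right))

t = Node(Leaf(1), Node(Leaf(2), Leaf(3)))
doubled = fmap_tree(lambda n: n * 2, t)  # tree of 2, 4, 6
```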

~~~
vertex-four
Ah, this is more interesting. I see what you're getting at; a lifting function
is something like, in Python:

    import functools
    mapF = lambda f: functools.partial(map, f)

The thing is, in Python, there's a set of "magic methods" that any object of
mine can implement (or which more commonly are implemented by generator
objects), and Python's built-in iteration tools will then work on them. Using
these is then as simple as:

    def doAThing(x):
        return foo(bar(xyz(x)))

    a = MyIterableClass()
    b = (doAThing(x) for x in a)
    c = (doAnotherThing(self.bluh, x) for x in a)
    doTheNextThing(c)

This pattern is reasonably common, popping up all over my code, and seems
fairly reasonable to me. It's a bit more typing, but that's never been a
bother for me.

When I need to implement a lifting function myself, it's pretty obvious that I
do, and it's not something I think particularly deeply about.

I'm just not really sure where the novelty comes from, I suppose. Is this less
common in other languages?

~~~
chas
You are absolutely right that this isn't particularly novel. It generalizes a
pretty wide range of concepts, which makes it hard to talk about accurately
without resorting to jargon.

Iterators and generators are solutions to the same sort of problem but aren't
quite as general. They do a good job of abstracting over streams and data
structures, but I don't think they are as useful for dealing with single
pieces of data with something special about them, such as computations that
could fail. It would be interesting to hack __iter__ to cover that case, but
it's not idiomatic Python. I think the "special types of data" idea works
better in a statically-typed language so it would be non-pythonic on a couple
of levels.
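For what it's worth, the __iter__ hack can be sketched (this Maybe class is
my own illustration of the non-idiomatic approach, not a real library):

```python
class Maybe:
    # A possibly-missing value that supports iteration, so Python's
    # generator tools treat it as a zero-or-one-element stream.
    def __init__(self, value=None, present=False):
        self.value, self.present = value, present

    def __iter__(self):
        if self.present:
            yield self.value

just = lambda v: Maybe(v, True)
nothing = Maybe()

# A plain function mapped over a Maybe exactly as over any iterable:
result = [x * 2 for x in just(21)]  # [42]
empty  = [x * 2 for x in nothing]   # []
```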

I use generators extensively in my python code, but I find they tend to infect
programs. Once generators are used in one place, everything that interacts
with that code works better as a generator. In Haskell, it is easier to use
lifting for part of an expression without affecting the style of the rest of
the program.

In addition, Haskell's type system and type inference make it possible to use
these techniques in ways that would be very strange and difficult to do in
Python which I didn't get in to in my original post because I wanted to stay
in somewhat familiar territory. For example, functions are Functors and there
are some interesting patterns that can be abstracted over by taking advantage
of that. That said, if you thought of lifting like this as a strongly-typed
slightly more general __iter__, you could be quite productive with it in
Haskell. This is also useful for other languages because it makes it clear how
much more general the __iter__ interface is and how it can be applied to
things that are not data structures or streams.
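The function-Functor case can be sketched in Python too (the name
`fmap_function` is mine): "mapping" over a function is just composition,
producing a function that runs the original first and the mapped function on
its result.

```python
def fmap_function(f, g):
    # fmap for the function context: compose f onto g's result,
    # mirroring Haskell's  fmap f g = f . g
    return lambda x: f(g(x))

inc = lambda n: n + 1
dbl = lambda n: n * 2

inc_then_double = fmap_function(dbl, inc)
inc_then_double(3)  # 8
```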

------
skybrian
If you're new to the whole concept, Elm has a good example of "lift" used in a
somewhat practical application:

[http://elm-lang.org/learn/What-is-FRP.elm](http://elm-lang.org/learn/What-is-FRP.elm)

------
reubenmorais
For those of you who, like me, didn't know about this: operators <<= and >>=
exist, can be overloaded, and do "bitwise shift assignment".

|a <<= b| is the same as |a = a << b|
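Python's >>= is an augmented-assignment statement rather than an expression,
so it can't be chained the way the article chains C++'s operator>>=; a loose
sketch of the same bind-as-operator idea in Python uses plain >> via
__rshift__ (these Just/Nothing classes are my own illustration):

```python
class Just:
    # Overloading >> as monadic bind for a toy Maybe type.
    def __init__(self, value):
        self.value = value

    def __rshift__(self, f):
        return f(self.value)

class Nothing:
    def __rshift__(self, f):
        return self  # a failure short-circuits the rest of the chain

safe_half = lambda n: Just(n // 2) if n % 2 == 0 else Nothing()

ok  = Just(8) >> safe_half               # Just(4)
bad = Just(3) >> safe_half >> safe_half  # Nothing, stuck after the first failure
```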

~~~
jxf
Does |x| mean something specific here in C++ that I'm not aware of, or are you
just using it to bracket the expressions?

~~~
adamnemecek
The latter.

------
eru
Nice to see that C++ can express this pattern. (It is extremely ugly there,
though.)

------
zrb
Is Maybe chaining the same as lifting?

I worked on a toy example for this before.

[http://stackoverflow.com/questions/7690864/haskell-style-may...](http://stackoverflow.com/questions/7690864/haskell-style-maybe-type-chaining-in-c11)

[http://stackoverflow.com/questions/9692630/implementing-hask...](http://stackoverflow.com/questions/9692630/implementing-haskells-maybe-monad-in-c11)

~~~
coolsunglasses
Speaking as a Haskell user:

Maybe chaining is just sequencing functions against results with bind.

Lifting is a general concept that is applicable to functors, applicatives,
monads, etc.

--> signifies the result of the computation:

    fmap (+1) [0, 1, 2] --> [1, 2, 3]
    map  (+1) [0, 1, 2] --> [1, 2, 3]
    fmap (+1) (Just 2)  --> Just 3

In the above examples, fmap (from the Functor typeclass, polymorphic) and map
(monomorphic function for mapping functions over lists) are _lifting_ the
functions they are passed.

There's a little bit of regrettable redundancy in Haskell's typeclasses:
fmap, liftA, and liftM (functor, applicative, monad) are all the same thing.

But liftM, while relevant to monads, isn't >>=, which is bind; bind is what's
relevant to Maybe chaining (just using >>= against Maybe values).
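The lift/bind distinction can be sketched in Python using the list context
from the [1..5] example upthread (the function names here are mine):

```python
def fmap_list(f, xs):
    # Lifting a plain function: one output per input, structure unchanged.
    return [f(x) for x in xs]

def bind_list(xs, f):
    # Bind: f itself returns a list, and the results are flattened
    # (Haskell's >>= for lists, i.e. concatMap / flatMap).
    return [y for x in xs for y in f(x)]

neighbors = lambda x: [x - 1, x, x + 1]

fmap_list(len, [["a"], ["b", "c"]])  # [1, 2]
bind_list([1, 2], neighbors)         # [0, 1, 2, 1, 2, 3]
```

fmap takes a function returning a plain value; bind takes a function
returning a value already in the context.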

------
zem
Useful would be a transpiler that converted some limited subset of Haskell to
C++ code using this pattern.

