Correct, but in order for an operator to be overloadable in C++, it must have a meaning in standard C++ as well. The comment was pointing out that it's surprising that >>= is an operator in standard C++.
In the above examples, fmap (from the Functor typeclass, polymorphic) and map (a monomorphic function for mapping functions over lists) are lifting the functions they are passed.
There's a little bit of regrettable redundancy in Haskell's typeclasses: fmap, liftA, and liftM (Functor, Applicative, Monad) are all the same thing.
But liftM, while relevant to monads, isn't >>=, which is bind; bind is what's relevant to Maybe chaining (just using >>= against Maybe values).
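To see the difference concretely, here is a minimal sketch of Maybe chaining with >>= (safeDivide is a hypothetical helper, named here just for illustration):

    -- a division that can fail
    safeDivide :: Double -> Double -> Maybe Double
    safeDivide _ 0 = Nothing
    safeDivide x y = Just (x / y)

    safeDivide 100 5 >>= safeDivide 10 => Just 0.5
    safeDivide 100 0 >>= safeDivide 10 => Nothing

The first failure short-circuits the rest of the chain, which is exactly the behavior bind provides and fmap/liftM does not.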
Lifting like this is one of the most useful ideas I have taken from Haskell, but I think it's hard to see its utility through the C++. At its core, lifting allows functions that don't require a computational context to be composed with values and functions that have a computational context. For instance, `unwords . reverse . words` is a function from Strings to Strings and is used like
unwords . reverse . words $ "The quick brown fox" => "fox brown quick The"
or
(unwords . reverse . words) "The quick brown fox" => "fox brown quick The".
This is just a pure, deterministic function mapping values to values; it has no special context. To compose it with a value that carries a context, like a string that may not be present (Maybe String), it can be lifted with fmap (which really would be more informatively named liftF, to keep with the Haskell lift function naming scheme).
fmap (unwords . reverse . words) (Just "The quick brown fox") => Just "fox brown quick The"
fmap (unwords . reverse . words) Nothing => Nothing
It now has failure awareness for free, with no effort by the original function author. Haskell has an infix synonym for fmap, <$>, that makes this even more natural:
unwords . reverse . words <$> Just "The quick brown fox" => Just "fox brown quick The"
This looks very much like the first example:
unwords . reverse . words $ "The quick brown fox" => "fox brown quick The"
This is really powerful because it means any function with no special context can now be used with any context that implements Functor (or Applicative for multi-argument functions). The view that fmap executes functions with no context in a special context also explains why fmap for lists is map. A function returning a list can be looked at as a function non-deterministically returning multiple values. This can be seen when using monadic bind (flatMap/concatMap) with functions returning lists:
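For instance, a function that returns two possible values for each input (using the same result convention as above):

    [1,2,3] >>= \x -> [x, x * 10] => [1,10,2,20,3,30]

Each input fans out into several outputs, and bind concatenates all of the possibilities.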
Looking at lists this way, a function with only one return value is deterministic, so if it is given a non-deterministic value, the result should be the deterministic function applied to each possible value, with no change in the number of possibilities. That is to say, it should be mapped over the list!
show $ 1 => "1"
show <$> [1..5] => ["1","2","3","4","5"]
fmap show [1..5] => ["1","2","3","4","5"]
The key idea here is that lifting like this provides a way to easily and extensibly compose functions with different expectations about the context in which they will be executing. This idea could be fruitfully applied in many languages outside of Haskell and I'm excited to see how it interacts with other parts of C++.
This explanation here is an example of the reason that many programmers refuse to touch Haskell, even those who are interested in other functional languages for practical reasons.
> At its core, lifting allows functions that don't require a computational context to be composed with values and functions that have a computational context.
This means absolutely nothing to me. What's a computational context? Why's it relevant? What does it let me do? What's the equivalent in, say, Python? Is it a necessary or useful concept in languages that aren't purely functional?
The rest of your post follows along the exact same lines. There's no actual explanation, just lines of Haskell code and no reasoning for why this is actually useful for programming in general, rather than simply Haskell elitism.
When have you applied this abstraction in real-world business code, and how has it simplified the code? How have you made sure it's readable to a programmer who may not be as skilled as you?
I'll give it a shot. (Disclaimer: I'm a non-Haskell programmer who's been trying to get his head around some of this stuff, but who may well have it wrong.)
You've got a function f that will do something to one or more values. That function already exists, and it works.
But f works on regular values. You also have some "special" values. chas described it as having a context; the way I've come to think about it is that the value is in some kind of jail. You can't just operate on it freely. (There are a number of different kinds of jails, but all of them mean that you can't just operate freely on the value.)
What we're talking about here, then, is a smuggling operation. It lets you get your function f to the values (or the values to the function, depending on your point of view) so that it can operate on the values. What's a bit neater is that it lets you separate f from the smuggling, so that f remains unchanged. And you can use the same smuggling scaffolding for any f.
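(If I've understood correctly, the smuggling operation in Haskell is fmap, and the same call works unchanged for any f:

    fmap length (Just "hello") => Just 5
    fmap negate (Just 3) => Just (-3)

Here length and negate know nothing about the Maybe jail; fmap does all the smuggling.)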
How has this simplified code in the real world? I have no idea, but I think I can tell you one place where it doesn't. Haskell tries to be purely functional, but it can't quite pull it off in the area of I/O (kind of by definition - if you're going to have I/O, you can't be purely functional). So Haskell does I/O in a jail. You use this kind of stunt to get access to I/O values. (Does that make it any less a violation of pure functional programming? Not that I can see. Perhaps they're trying to contain the contamination?)
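(Something like this, if I have it right: getLine lives in the I/O jail, and fmap smuggles a pure function in to work on its result:

    import Data.Char (toUpper)

    shoutedLine :: IO String
    shoutedLine = fmap (map toUpper) getLine

shoutedLine is still jailed as an IO String; the pure function never escapes the context.)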
But in C++, you'd just drop out of functional programming for a line or two, and do stream I/O. You wouldn't have this problem at all, so you wouldn't need this solution.
As I said, I'm trying to understand this stuff, and I'm pretty sure I understand it imperfectly. Your mileage may vary...
I'm sorry that my explanation was so alienating to you. It was intended to be practical, but clearly I missed that mark for people without a certain amount of Haskell experience. Is there anything particularly confusing there that you would like explained?
The reason I used the phrase "computational context" is that functors are very general. A Functor can be thought of as something that lets you translate functions with nothing special about them, just functions from a value to a value, into functions that operate correctly on non-vanilla values. Some examples of non-vanilla values (values with a computational context): any value in a data structure (this is regular mapping over lists and trees of values); values that depend on external configuration or carry internal state (more useful for pure FP); streaming data, processed by functions that don't know how the data is being streamed; data that might be missing, handled without a null check; data that might be actual data or an error code; and the results of a parser.
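To make a few of those concrete, here is the same kind of lift applied to several standard contexts (all of these use only the standard Prelude instances):

    fmap length (Just "abc") => Just 3      -- data that might be missing
    fmap length Nothing => Nothing
    fmap length ["ab", "abc"] => [2,3]      -- every value in a data structure
    fmap (+1) (Right 5 :: Either String Int) => Right 6  -- data or an error code

length and (+1) are ordinary functions with no knowledge of any of these contexts.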
To give a Python example, the lift function would be a decorator that made the function it was applied to correctly operate on almost any input value you gave it. If you give it a data structure, it is applied to all values in the data structure. If you give it a generator, it creates a new generator that runs the function over each element as it is generated, something like:
    return (f(x) for x in argument_generator)
where f is the function that the decorator was applied to. This isn't as popular in Python, but .then() in promises/futures is a lifting function that lifts functions operating on data to functions operating on data that may asynchronously arrive in the future or fail to arrive at all.
This is very practical because it lets you write code with very little repetition or boilerplate. When someone creates a data structure or processing technique (streaming, parsing, non-determinism), they write the code that operates on values inside that structure or process once, in the definition of lift/map, instead of spreading it across every function that wants to work with the structure without taking advantage of anything special about it. This removes boilerplate from functions and makes them easier to read and understand.

In addition, it makes functions more extensible with less work. Anyone who comes along with a new data structure or processing technique can use all of their existing functions without writing a new compatibility layer. Writing lifting functions separates particular data structures or processing techniques from general functions: a function can be lifted into many different contexts without any changes, which decouples specific implementations from general code.
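As a sketch of what that looks like in practice (Tree here is an illustrative type, not from the discussion above): the traversal boilerplate is written exactly once, in the Functor instance, and every existing plain function then works on trees for free.

    data Tree a = Leaf | Node (Tree a) a (Tree a)

    -- all of the tree-walking code lives in this one place
    instance Functor Tree where
        fmap _ Leaf         = Leaf
        fmap f (Node l x r) = Node (fmap f l) (f x) (fmap f r)

    fmap show (Node Leaf 1 Leaf) => Node Leaf "1" Leaf

No existing function needs a compatibility layer for Tree; fmap is the whole adapter.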
Ah, this is more interesting. I see what you're getting at; a lifting function is something like, in Python:
    import functools
    mapF = lambda f: functools.partial(map, f)
The thing is, in Python, there's a set of "magic methods" that any object of mine can implement (or which more commonly are implemented by generator objects), and Python's built-in iteration tools will then work on them. Using these is then as simple as:
    def doAThing(x):
        return foo(bar(xyz(x)))

    a = MyIterableClass()
    b = (doAThing(x) for x in a)
    c = (doAnotherThing(self.bluh, x) for x in a)
    doTheNextThing(c)
This pattern is reasonably common, popping up all over my code, and seems fairly reasonable to me. It's a bit more typing, but that's never been a bother for me.
When I need to implement a lifting function myself, it's pretty obvious that I do, and it's not something I think particularly deeply about.
I'm just not really sure where the novelty comes from, I suppose. Is this less common in other languages?
You are absolutely right that this isn't particularly novel. It generalizes a pretty wide range of concepts, which makes it hard to talk about accurately without resorting to jargon.
Iterators and generators are solutions to the same sort of problem, but they aren't quite as general. They do a good job of abstracting over streams and data structures, but I don't think they are as useful for dealing with single pieces of data that have something special about them, such as computations that could fail. It would be interesting to hack __iter__ to cover that case, but it's not idiomatic Python. I think the "special types of data" idea works better in a statically-typed language, so it would be non-Pythonic on a couple of levels.
I use generators extensively in my python code, but I find they tend to infect programs. Once generators are used in one place, everything that interacts with that code works better as a generator. In Haskell, it is easier to use lifting for part of an expression without affecting the style of the rest of the program.
In addition, Haskell's type system and type inference make it possible to use these techniques in ways that would be very strange and difficult in Python, which I didn't get into in my original post because I wanted to stay in somewhat familiar territory. For example, functions are Functors, and there are some interesting patterns that can be abstracted over by taking advantage of that. That said, if you thought of lifting like this as a strongly-typed, slightly more general __iter__, you could be quite productive with it in Haskell. This is also useful for other languages because it makes it clear how much more general the __iter__ interface is and how it can be applied to things that are not data structures or streams.
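For instance, fmap for functions is just composition, so lifting one function over another runs it on the other's result:

    fmap (+1) (*2) $ 3 => 7    -- the same as ((+1) . (*2)) 3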
http://elm-lang.org/learn/What-is-FRP.elm