C++17: I See a Monad in Your Future (bartoszmilewski.com)
153 points by adamnemecek on Feb 27, 2014 | hide | past | favorite | 165 comments


Since the two main comments in this thread are unnecessarily negative, I will point out the ongoing discussion in /r/cpp:

http://www.reddit.com/r/cpp/comments/1z1b1q/c17_i_see_a_mona...

The author of the post is also participating there.


Everyone in the c-preprocessor subreddit is surprisingly open minded about new c++ features.


I honestly don't understand the outrage here. This is about standard library functionality, not new language features. As far as standard libraries go, C++'s is actually pretty small and could use some additions.


(I presume) dmm is cracking a joke about how he thinks "cpp" should mean "C preprocessor" and not C++.


Thanks for the explanation. My mind had been blown for a couple seconds. :)


It's sad to see so much trolling about this language, but I suppose we've all had to maintain old C++ programs that became spaghetti code and dependency hell.

I'm currently working on a new project using C++11/14 and it's a pleasure to see more functional features that allow for more concise code.


I'm of two minds about this: being moderately competent in C++ (my bread-and-butter language), and a dabbler in and True Believer of Better Languages (à la the blub paradox), I think it might be nice and neat to have more high-level features in C++ (yay lambdas in C++11!).

On the other hand, C++ is already a mess. It's not even a beautiful mess, but it is a functioning mess. Adding more stuff to it just seems like heaping more things on top of a pile that's already falling to pieces.

Granted, like most everything else in C++, it will probably be optional, and hey, if nothing else, having more (optional) features is very pragmatic.


It's not about a language feature, is it?


I wonder what programming language is not a mess?


Ada is not a mess. C89 was not a mess. Almost all languages become a mess when they get older or when standards committees are involved. For C++, it was a mess very early on. After C++98, it was an awful mess of compiler incompatibilities. Strangely, a usable language seems to have recently emerged from this mess.


The ones that nobody improves.

Either because they are new, because nobody uses them, or because they are LISP.

I really don't like that no Lisper thinks it's a good idea to improve LISP, but that's the way it is.


I'll just state the obvious: Scheme. ;)


And Unlambda.

Joking aside, Haskell is pretty pleasant to program in most of the time.


Forcing strict evaluation feels like a mess, Template Haskell feels like a mess, debugging space leaks feels like a mess, not to mention that the Haskell learning curve is very steep.

Haskell is pretty good at a lot of things, so don't misunderstand me: Haskell is really good! But, at the same time, it still has lots of room for improvement. I guess perfection doesn't exist.


Oh, there's room for improvement. I guess we just used different standards for `mess'.

I think something like http://lambda-the-ultimate.org/node/1446 could be the way forward. (And would be achievable from Haskell with enough work.)


Everyone and their grandma recommends Scheme to me, but I am more willing to try a Lisp that I can actually use in production, like Clojure. I don't know.


The assertion was "not a mess," not "usable in production." ;)


Touché ;).


Why is Scheme unusable for production?


I wouldn't even think where i would use it, can you enlighten me?


Scheme with the batteries included:

http://racket-lang.org/


I guess the Katamari Dama-C++ is finally big enough to start picking up the functional programming languages. Will the ball never stop growing?


... Start? C++ has been picking up functional programming paradigms pretty much since its standardization and the inclusion of the STL in that standard. C++'s evolution has been inexorably in this direction for a long long time.


C++ has been an awful language for FP. First of all, you can't do FP without lambda expressions. It's just way too painful to do it without lambdas. Fortunately C++0x took care of that.

Then, C++ is too low-level a language. Memory management is a bitch to deal with. FP is comfortable only in garbage-collected languages; otherwise you need a really, really well designed library, and unfortunately the STL ain't it, as the STL has been plagued with memory leaks for years and still is. And many things in it are simply broken.

For me the test for FP is pretty simple - a language is FP-capable if there exists for that language a comfortable and reliable library for persistent/immutable collections/data-structures. And this is in fact not enough, but rather the bare minimum. You simply can't talk about functional programming without immutable/persistent data-structures.

Can such a library be designed for C++? It's much harder to do it than in say, a JVM language, or in Javascript, or in any garbage collected language, because of the memory management. Persistent data-structures rely on structural sharing for efficiency and in turn you rely on the garbage collector to take care of the junk when no longer needed. As an exercise, try to implement an immutable data-structure in C++, even a simple one like a linked-list and watch in horror the output of Valgrind as you're using it.

I do think it's possible though, with a little care towards designing it. I think the TFA explains quite well the value of good design and even takes memory management into consideration. If more people actually do FP in C++, maybe one day we'll have an awesome standard and an awesome library that we could use for our FP needs.

With that said, I really hope they won't leave std::future as is and I hope they do improve it. Adding a broken interface to the standard is not OK. If you can't design an interface well, then adding it to the standard does more harm than good. On the other hand, the committee seems to be aware that std::future needs fixing, so that's good.


Funny you should mention persistent data structures in C++. That's exactly what I've been working on. Here's my blog about leftist heaps translated from Okasaki directly to C++: http://bartoszmilewski.com/2014/01/21/functional-data-struct... . It follows previous ones about lists and RB trees.


To be clear, I made no value judgement as to whether it was doing it well in that comment, only that it has been marching there for over a decade now.


Not features, languages. We must be nearing the end of the game... C++ may be picking up whole countries soon.


Won't do countries, because its Unicode support is garbage.


I appreciate this article's clear explanation of the topics! I've yet to really get my head around monads et al., but I could follow this article through from beginning to end.

Thanks!


From skimming the article I can't tell if C++ will have anything akin to type classes, which is how Monad is expressed in Haskell. Is that what "Concepts" will bring, and is that what enables Monad to be expressed?


Right, the article in question is just pointing out how C++ is implementing a specific monad. In this case the values are futures, and the futures library implements all the functions required for them to be a monad. The C++ spec certainly doesn't call them that and doesn't provide a generalized monad.

I think the main point to take away is that this is a case where a monad is the right abstraction and can be implemented very logically, but since C++ doesn't and cannot really speak about monads directly, there comes a lot of boilerplate and you lose a lot of "free" utility functions you'd get in a language that can talk about arbitrary monads.


The concept (lower case c) of monads cannot be represented in C++, but you can certainly write monads in it, like in just about any language. You just can't make the compiler verify that what you have is actually a monad. As you suspect, Concepts (capital C) should make it possible to type-check monads in C++ like in Haskell.


You can't make the compiler check monad laws in Haskell, either. It would seem to me you should be able to write a Monad template class, though I admittedly haven't tried.

Edited to add: Ah, it seems a template argument can't itself be a template, which would seem to rule it out (though I'm running on very little sleep, so I might very well be wrong).


You can use template template parameters in C++ - http://stackoverflow.com/questions/213761/what-are-some-uses...


Ah, so you can! Here's the list monad. I'm not sure the type unification on templates is sophisticated enough to get all of the niceness of Haskell monads (I wasn't able to get bind to work generically, for instance, and I'm not sure what would happen with another instance of pure differentiated only by return type...), and certainly there's no do-notation, but clearly the functionality can be expressed. Still fun. It prints the results of expressions prefixed by their (loose) Haskell equivalents. Applicative included for good measure.

    #include <iostream>
    #include <list>
    #include <functional>
    
    
    using namespace std;
    
    // fmap :: (a -> b) -> f a -> f b
    template <template<typename> class F, typename A, typename B>
    F<B> fmap(function<B (A)>, const F<A> &);
    
    // pure :: a -> f a
    template <template<typename> class F, typename A> F<A> pure(A);

    // appl :: f (a -> b) -> f a -> f b
    template <template<typename> class F, typename A, typename B>
    F<B> appl(F< function<B (A)> >, F<A>);
    
    // join :: m (m a) -> m a
    template <template<typename> class M, typename A>
    M<A> join(M< M<A> >);

    // mbind :: m a -> (a -> m b) -> m b
    template <template<typename> class M, typename A, typename B>
    M<B> mbind(const M<A> &x, function<M<B> (A)> f);
    
    template <typename A, typename B>
    list<B> fmap(function<B (A)> f, list<A> xs) {
        list<B> ys;
    
        for(auto x : xs) {
            ys.push_back(f(x));
        }
    
        return ys;
    }
    
    template <typename A> list<A>
    pure(A x) { return { x }; }

    template <typename A, typename B>
    list<B> appl(list< function<B (A)> > fs, list<A> xs) {
        list<B> ys;
    
        for(auto f : fs)
            for(auto x : xs)
                ys.push_back(f(x));
    
        return ys;
    }
    
    template <typename A>
    list<A> join(list< list<A> > xss) {
        list<A> ys;
    
        for(auto xs : xss)
            for(auto x : xs)
                ys.push_back(x);
    
        return ys;
    }
    
    template <typename A, typename B>
    list<B> mbind(const list<A> &x, function<list<B> (A)> f) {
        return join(fmap(f, x));
    }
    
    template <typename A> ostream &operator<<(ostream &out, const list<A> &xs) {
        int i = 0;
    
        out << "[";
    
        for(auto x : xs) {
            out << (i++ ? ", " : " ") << x;
        }
    
        out << " ]";
        return out;
    }
    
    int main() {
        list<int> xs = {1,2,3};
        function<double (int)> halve =
                [] (int x) -> double { return 0.5 * x; };

        function<double (int)> quarter =
                [] (int x) -> double { return 0.25 * x; };

        function<list<int> (int)> repeatOnce =
                [] (int x) -> list<int> {
                    list<int> ys = { x, x };
                    return ys;
                };

        function<list<int> (int)> repeatIfEven =
                [&] (int x) {
                    if(x % 2) return pure(x);
                    return repeatOnce(x);
                };
    
        list< function<double (int)> > fs = { halve, quarter };
    
        cout << "return 7 = " << pure(7) << endl;
        cout << "xs = " << xs << endl;
        cout << "fmap halve xs = " << fmap(halve, xs) << endl;
        cout << "fmap repeatOnce xs = " << fmap(repeatOnce, xs) << endl;
        cout << "fs <*> xs = " << appl(fs, xs) << endl;
    
        cout << "xs >>= repeatOnce = " << mbind(xs, repeatOnce) << endl;
        cout << "xs >>= repeatOnce >>= repeatOnce = "
                << mbind(mbind(xs, repeatOnce), repeatOnce) << endl;
        cout << "xs >>= repeatIfEven = " << mbind(xs, repeatIfEven) << endl;
    
        return 0;
    }


Nice result :)

Btw, it's tempting to use std::function in the same way as you would use a function in Haskell, but it usually isn't a very good idea. In C++ std::function is for when you want type erasure, usually because you wish to store a functor somewhere with a uniform type. Type erasure in C++ is similar to existential typing + a type class in Haskell. You could say that all Haskell closures are implicitly using type erasure and then the compiler has to work hard to optimize away the extra boxing/indirect calls.

Here is another way to write map in C++:

    #include <iostream>
    #include <list>

    template<typename F, typename A, typename B = typename std::result_of<F(A)>::type>
    std::list<B> map(std::list<A> const & xs, F f)
    {
        std::list<B> ys;
        for (auto const & x : xs) ys.push_back(f(x));
        return ys;
    }

    int main()
    {
        std::list<int> xs{1, 2, 3, 4, 5};
        auto ys = map(xs, [](int i) { return i * 2; });
        for (auto y : ys) std::cout << y << "\n";
    }
You can expect this code to run exactly as fast as if you had written the loop by hand instead of using the map template function.


Yeah, wasn't meant to be idiomatic C++ code, just a simple proof-of-concept. If you'd care to translate it into the former I'd be fascinated to see the result. Playing around a bit, I've been having trouble expressing the type of mbind when I make the function type a parameter to the template. I'm 70% sure it's my failure, not the language, though.


That's not really true. A C++ compiler will verify that an instantiated template type checks. What concepts gives you is better error messages, overloading, and I presume it allows type checking of uninstantiated templates.


C++ doesn't need type classes as it has more powerful overloading.

this is valid C++14:

    template<typename A, typename B>
    auto add(A a, B b)
    {
        return a + b;
    }


I wouldn't say "more powerful", maybe "less constrained" (and not that that's a good thing).

One thing C++ templates can't express, for example, is polymorphic recursion. http://stackoverflow.com/questions/10321856/does-c11-support...


Wrong link, that's a different issue (type recursion). This is a good discussion about limitations of polymorphic recursion in C++: http://lambda-the-ultimate.org/node/1319


Can overloading express parametricity?


Can you give an example of what you mean?


E.g. the type of map is

    map :: forall a b. (a -> b) -> [a] -> [b]
The important part here is the `forall'. This means that Haskell guarantees that essentially the map function cannot look at the elements of your list. The only way for map to produce a value of type `b' is to apply the given callback on an `a'.

So you know that `map' cannot, e.g., add up your `a's.

Compare C++'s templates and overloading, where more function definitions with more specific types have precedence over more general ones.

See the first answer of https://stackoverflow.com/questions/18180428/is-parametric-p... for some discussion.


This doesn't appear to have anything to do with type classes or overloading, but rather with (unconstrained) parametric polymorphism.

Such a function is easy to write in C++, but the function prototype can not provide a strong guarantee that your b value will only be produced by your callback.


There are limits placed on you by parametric polymorphism. Adding typeclass constraints in Haskell is how you weaken those limits in a controlled way. So I think it has quite a bit to do with typeclasses. I agree "overloading" might not be the correct term to use here, although it is itself a bit... overloaded.


> but the function prototype can not provide a strong guarantee

It doesn't provide any such guarantee, and I think that was the whole point.


Minor nit, but I believe that his description of the async pattern in future is not only monadic, but also comonadic.


i prefer "holoambiadic".

in all relative seriousness, would anyone care to put forward a concise definition of "monad" (for programmers, not math-category-theorists)?


Functor is something that has a map function.

A monad is any generic class that has a flatMap function and a factory function that together satisfy 3 laws (explained below).

flatMap is like map, except the function passed to flatMap must return a monad (of the same kind).

The factory can make a monad of a regular value (it wraps it)

For example, an array's flatMap function takes a function that takes an element and returns an array. You can return an empty array if you wish, or an array of two elements (of the same type). flatMap "flattens" the result - it returns an array of elements of that same type.

The factory function and flatMap must satisfy 3 laws: left identity, right identity and associativity. That is, flatMapping with the factory function should result in the same thing you started with; wrapping a value with the factory and then flatMapping it should result in that same value being passed to the function you passed to flatMap; and it shouldn't matter whether multiple flatMap calls are chained or nested.

Like design patterns in OO languages, it isn't immediately clear why its useful to give a name (or a type) to all generic classes that satisfy this "interface" called monad, but it turns out that the concept pops up everywhere, from arrays to futures to Maybe (to replace null values) to Either to Observable and so on.

Not sure if I managed to be correct, understandable, concise or neither :P


ok i'm going on a limb here and suggest that you're probably "correct". but you need to use more terms selected from the following list (random usage is encouraged but idempotent): closure, adjoint, operator, endofunctor, universal algebra, maybe, iteratee, continuation, surjective, type constructor, simonpeytonjonesing


To start with, Functor gives an generic interface (fmap) allowing you to take

    Foo<A>
and a function

    B f(A)
and apply it to get

    Foo<B>
Then monad gives us something called join, which lets us take:

    Foo<Foo<A>>
and turn it into

    Foo<A>
It's pretty obvious that a Vector of Vectors of A's can be turned into a Vector of A's by concatenating them into one Vector. Also optional (something I don't know much about, sorry) can be turned from Optional<Optional<A>> into Optional<A> if the outer actually contains an inner optional, and the inner optional contains an A.

So with these two things, we can do things like:

    Vector<B> f(A);
    Vector<C> g(B)

    Vector<A> foos = {a,b,c};

    foos.fmap(f).join().fmap(g).join()
to get a Vector<C>. The result is basically all possible results of applying f to all the values in foos, and then applying g to all the values returned by all the calls to f. This is the non-determinism monad, where each function f and g can produce multiple results (including none) for each input, and we can chain multiple non-deterministic computations to get all the non-deterministic results.

Usually Monad is explained with return and bind, but I think it's a bit less obvious. But I'll give it a shot...

bind in Haskell is something that takes something of type

    Foo<A>
and a function with type

    Foo<B> f(A)
and produces something of type

    Foo<B>
From above, we can rewrite the example with

    Vector<B> f(A)
    Vector<C> g(B)
    Vector<A> foos = {a,b,c};
    
    foos.bind(f).bind(g);
and get the same result as before. I guess you could use slightly cleaner syntax (why the hell not overload >> for yet another bizarre use right? [leading to foos >> f >> g;])

I hope that you can see that this is just a simple interface which can work for many types that look like Foo<A> (Vector, Set [assuming C++ has that?], Optional, even pointers to some extent; see the discussions of the introduction of "?." into C#, which is exactly what the Maybe monad encodes). There are also plenty of other monads in the Haskell universe which aren't as easy to show in C++ because a) I'm not sure C++'s type system is up to it and b) my C++ knowledge is not strong enough to demonstrate things like the State monad, and definitely not the Cont[inuation] monad.

Anyway, to bring it back to the topic of the article, it should be clear that something of type

    Future<A>
can be composed with other functions which produce more Futures:

    Future<B> f(A);
    Future<C> g(B);
    
    Future<A> async_A("thing");
    
    async_A.bind(f).bind(g);
   
This handles all the unwrapping and passing of values "inside" the future to each consecutive function.

I know this isn't a concise explanation, but it's because it's a very simple, but also very abstract idea, and its best introduced with example uses, then generalising it.


Good explanation. To add, you have (at least) two choices to implement Monads for Vectors. The one you explained is akin to the standard Haskell List Monad, and the other one is the ZipList.


Yes, but only if you turn a blind eye to `get' being a partial function (ie it might fail or diverge).


>Combining functions using next, unwrap (or, equivalently, bind), and make_ready_future is equivalent to specifying data dependencies between computations and letting the runtime explore opportunities for parallelism between independent computations.

He wants a JIT for C++?


What I would like is a way select() over a set of futures.


How would that be different than the `when_any` function the author described?


I believe the difference is that you wouldn't have to then iterate over all the futures to see which one fired, it would just return one of them.

Boilerplate prevention, I think.


Thanks, somehow I forgot about the author's complaint about the API of `when_all` between the time I read it and the time I wrote my comment.


Why are all the Haskell and functional folks trying to screw up C++?

I'm sure that's not the actual state of things, but goddamned if that isn't what appears to be happening.

EDIT:

Before downvoting, consider the elaboration I've replied below with (and then you can at least downvote that too!).

EDIT2: HN wouldn't let me submit this, so is edit:

The basic issue is not that futures/promises/monads aren't awesome--I'm quite, quite sure that they are the future (harhar) for solving a lot of things. I use them extensively, for example, in Javascript. They're Good Things (tm).

It's not that template metaprogramming is bad, or not useful, or that Boost isn't our lord and savior for greenfield C++ code.

The problem is that this shit still has to interoperate with code written more than two decades ago.

C++ began life as a weird ball of OOP and templates slathered over C. It failed to fix some of the real, true, awful parts of C (pointers, for example), and those are still around.

Pray tell me, why do I want a language which has both atomic exchanges (sorta, kinda, depending on your compiler and environment) and monads and function mapping? Doesn't that strike anyone else as, oh, I don't know...ill-focused?

C++ is simply too large a language to recommend to a beginner, too difficult to actually make guarantees about (when you can at all!), and in general the whole rotten mess should be cut in half.

Into, perhaps, C, and Haskell.


Why do you feel that this is "screwing up C++"?! I think you need to motivate that a lot better, especially given that this author provided logical reasons for his issues with the design of std::future. :(


Simon Peyton Jones has always said that the secret goal of Haskell is not to get you to write Haskell but change the way you think about programming. Seems like we've failed to avoid success again.


What exactly are your objections?

Is it the naming convention? Functor, Monad, Applicative, bind, join, fmap etc? Yes, Haskell can be a bit confusing at first. But if you don't like them you can urge the standards committee to use different names.

Or is it the concepts themselves? That would be pretty sad. A Functor is a really basic concept. You probably use it every day, in one form or another. I'm pretty sure of that. Once you understand this basic concept, you can start building the others on top of that knowledge. It's all about building a common language and understanding of these basic concepts.


You know, if C++ used the concepts and just put sane (non-abstract-algebra) names on them, that might be really useful. I might get Haskell a lot better if the C++ committee does this well.


They are simple concepts, just more abstract than you're used to. They actually turn up all over the place - they're a bit like classes were before object orientation was popular, and everybody was building structs filled with function pointers in C in order to emulate polymorphism. The things these concepts address are things you're doing in your code today, by hand, and don't realize how they can be compressed, mostly because C++ has historically lacked a concise notation for type-inferred lambdas.

They're able to turn a lot of algorithms inside out; make the kernel a simple function of values where they used to be threaded through the code flow, and use things like bind to mix them back in. This makes algorithms testable and more composable, and raise the level of abstraction in the rest of your code so that you're operating more at the DSL level for your domain, whatever it is.

It's worth learning the concepts. Once you do, you'll be surprised that there's so little "there" there.

And their commonality and generality are exactly the reasons why they should have short yet precisely general names.


Most of the comments on here seem to be talking about whether the name is good or not. For me, the question isn't what a 'monoid' is called, or whether it can be defined in a few lines of English, but rather what it is that they are useful for.

I've been using Scala full time for a couple of years now, but it took me ten months to really understand not just what monads are, but how they are useful in expressing a program from a different perspective to other approaches, and the benefits that come from that. Now I use them all the time and see how they add huge benefits to my code.

In all the time that I've been working with Scala, and building my knowledge and experience of how to apply functional principles usefully, I have yet to (knowingly) use a Monoid, or encounter a situation where it was clear that they would have a benefit. I'm not saying that there are no such situations, and it may be that there have been situations where they would have been useful, but I didn't know it, and I would love to be able to add them to my toolbox. However, I do see that a lot of the time these concepts are introduced in a very abstract way, and what is missing is not the simple definition, but rather some clear idea of when, how and why to use them.


Yeah, moving away from the abstract algebra names definitely improves clarity because ``template <typename HasABinaryAssociativeOperationAndIdentity>`` is a much better choice than Monoid.


Your examples are either at the extreme of literalness, or at the extreme of abstractness.

The most useful name is probably somewhere in the middle. It may not necessarily consist of everyday words, but it probably relates very obviously and clearly to the C++ functionality.

"Monoid" is a very detached term. It doesn't obviously relate to the C++ construct or concept the way that terms like "typedef", "template", "stream" and so on do for other constructs/concepts.


Not really. I've had Java developers look me in the face and say that the nonsense names (very much like the one above) that Spring uses are somehow more readable than simple operators or abstract names.

Monoid is NOT a detached term: it says I have a type with an identity element and an associative binary operator, such that (id `op` elem) is always elem. Try to tell me that's complicated in the slightest. I most definitely cannot give you a single-line description of templates, or of most C++ features. Abstraction makes things more general (i.e. less specific), meaning they're easier to describe, reason about, and discuss, because I can only talk about things that hold universally.

If you doubt me let's show some things that form a Monoid right now, and how easy it is to define.

1) Strings - identity element = "" - associative operator = +

2) Lists - identity element = list<A>() //empty list - associative operator = concat

and we can go on and on finding instances of Monoids simply from a one line description.


Monoid is NOT a detached term: it says I have a type with an identity element and an associative binary operator, such that (id `op` elem) is always elem. Try to tell me that's complicated in the slightest.

Just for the record, I'd like to state that I couldn't make heads or tails of this explanation. 'Identity element', when thinking of C++, just makes me think of a hash function or something.

Looking at the examples, it's sort of clearer: a monoid is a type that can hold a value that's empty (or zeroed, or whatever). But that doesn't change the fact that that explanation (and the term 'monoid') is so firmly rooted in math jargon that it's basically incomprehensible to someone used to thinking in C++.


> is so firmly rooted in math jargon that it's basically incomprehensible to someone used to thinking in C++.

It's firmly rooted in like 5th grade math. Every child in the US that takes basic algebra in school learns about identities and identity rules like "0+x=x" or "1*x=x" or in pseudocode as something like "(id `op` elem)=elem".


Every child certainly doesn't learn the term "monoid" though. The issue here isn't the concept; it's the terminology.

Possibly, C++ should adopt the name anyway, but it's by no means a commonly used or understood term by most users of C++. As has been mentioned, C++ is above all a language whose design is driven by pragmatism, so if another more common term can be found for the idea of a monoid you can bet it'll be preferred (even at the expense of some literary accuracy).


I'm not sure that Java programmers are necessarily the best judges of good naming conventions. I've personally witnessed several of them immediately mention "gonads" when first hearing the term "monad".

For most C++ terminology, you don't need to give a single-line description. The keyword or concept name alone generally embodies that information very well. To use your example, at a basic level a C++ "template" is quite similar to a form letter, stencil or other real-world "templates". It's a predefined mold that you inject your specific information/material into to get a final product with minimal effort.

There just isn't a direct relation back to the real world with a term like "monoid". You end up with people who are confused at best, or more likely they're thinking of some other word that sounds similar but is totally unrelated.


> There just isn't a direct relation back to the real world with a term like "monoid".

And that might even be a good thing. The stencil analogy for C++ templates gives you a warm fuzzy feeling, but doesn't actually help you program. Contrast: the one line definition of Monoids is all there is to them. That's all there is to know about Monoids.


Monoid is a semigroup with an identity, and also a group without inverses. Learning a small amount of algebra shouldn't be scary, and the vocabulary will help you in countless mathematical fields - many of which touch directly on programming.


> Monoid is a semigroup with an identity, and also a group without inverses.

Classic. Sometimes Haskell fans have useful things to say, and sometimes they're just using terminology to try to win an intellectual dick-size war. The tricky part is telling the two apart.


It's basic abstract algebra - this is literally stuff I learned in a math class 12 years ago. It's used in a hojillion papers, some abstract and some concrete and highly relevant to real-world problems (including encryption and linear algebra - both of which can be directly useful to programmers), and they're not even hard. Just learn some math - this is undergrad stuff.


The concepts themselves aren't difficult to understand. It's just that some people go out of their way to use uncommon terminology that ends up obscuring what they're trying to say, rather than making it clearer.


Is there more common terminology for "a set with a binary operation that is associative and has an identity"?


Any name that we'd come up with either wouldn't capture the general case or would be just as arbitrary as Monoid (ignoring historical usage). The gist of this argument is that Pacabel and the like can't describe the concept using the traditional OOP evocative naming style, but insist that the term "Monoid" is wrong because of some irrational aversion to terms from abstract algebra. There's really nowhere for this argument to go...

Imagine trying to describe a monoidal category or topoi in the OOP style; it's like trying to describe what a Fourier transform is to someone who insists on using roman numerals.


> it's like trying to describe what a Fourier transform is to someone who insists on using roman numerals.

Or describing Latin declensions to someone who reads kanji. The key is that, if the audience admires Latin speakers, you can appear smart.


You're right, why use precise language to describe abstract concepts when we could just communicate knowledge using longform metaphors and cryptic analogies. Or maybe we could just grunt at each other like animals.


It's good to use precise, specialised language to communicate with other people who are familiar with that specialised language. Using specialised language to "explain" another piece of terminology from the same specialised vocabulary, though, doesn't create the impression of someone who is genuinely interested in communication.


It may not necessarily consist of everyday words, but it probably relates very obviously and clearly to the C++ functionality.

Such as?

"Monoid" is a very detached term. It doesn't obviously relate to the C++ construct or concept the way that terms like "typedef", "template", "stream" and so on do for other constructs/concepts.

Well of course! Monoid is far more general than those other concepts. It's also a far better name, being unambiguous. Those other names are reused all over the place in different contexts and their meanings subtly shift between their uses. Monoid is not so overloaded; once you learn it, you know what it means every time you encounter it.


I'm not going to claim to have a better name at this moment.

However, I would like to think that there's somebody out there who could come up with a name that describes the concept sufficiently, while still using pragmatic C++-style terminology.

All I'm saying is that "monoid" is terminology of a style that's very different from basically all other C++ terminology. To many C++ programmers, even those with a background in mathematics, it's no better than gibberish.

At least the existing C++ keywords and terminology tend to be far more descriptive of what they're referring to, even if there is ambiguity in some cases. This is true even for the more abstract concepts in C++.


"Monoid" is a very detached term.

What does that mean? How is it any more detached than object or method?


Both "object" and "method" are common words with very obvious parallels between the programming language concepts and similar real-world concepts that are familiar to basically everyone. That's what makes them good terminology.

An "object" is something specific that exists (implying it can likely also be created and destroyed), and can be distinguished from other objects. This applies equally well to a combination of a data structure and some related code as it does to a tennis ball, or to a car, or to a book.

A "method" is a systematic way of doing something. That "something" could be the operations described by some source code, or it could be tying a shoe, or even brushing one's teeth.

"Monoid" has no good real-world parallel. It has no related or relevant meaning outside of very, very specific contexts. That's what makes it "detached". It's off on its own, with an isolated and non-obvious meaning.


'"Monoid" has no good real-world parallel. It has no related or relevant meaning outside of very, very specific contexts. That's what makes it "detached". It's off on its own, with an isolated and non-obvious meaning.'

It's jargon, but it's jargon shared across a few fields, which makes it more worth learning. And while it lacks a "real-world parallel" it has a lot of simple examples. Lists over concatenation, integers over addition, positive integers over max, negative integers over min... It's interesting and informative to observe why integers over max (or min) is not a monoid but is a semigroup.


If you limit yourself to concepts you can express by relating them to physical-world objects and human actions, then you lack the language to express abstract concepts entirely. The words mathematicians chose to describe these structures have no real-world connotations because the ideas, in their full generality, have no representation in our everyday experience. There's no way you'll ever be able to describe any reasonably complex structure from abstract algebra by concatenating a bunch of household adjectives; it doesn't capture the full generality or convey any more information.


So what kind of object is a multiplication, or an addition? Monoid has "no good real-world parallel" because it is not a real-world concept. -- "It has no related or relevant meaning outside of very, very specific contexts" -- Well, that's the very nature of these abstractions; they apply to many, many contexts when working with data.


You are being too optimistic about object and method. (They look familiar because of long exposure to OOP.)

The Haskell people share your concerns. Simon Peyton Jones famously suggested "our biggest mistake [in designing Haskell was u]sing the scary term 'monad' rather than 'warm fuzzy thing'". (https://research.microsoft.com/en-us/um/people/simonpj/paper...).


'"Monoid" has no good real-world parallel. It has no related or relevant meaning outside of very, very specific contexts. That's what makes it "detached". It's off on its own, with an isolated and non-obvious meaning.'

Also, this objection would seem to apply equally well to functions and variables as to monoids.


I wouldn't say so. "Function" is a pretty common term used to refer to somebody doing some specific task. There's an obvious correlation between an employee tasked with carrying out a specific role or action, and a chunk of source code that does something specific.

And the same holds true for "variable". It's a very common term for things that change their state of being, including stuff that isn't directly related to programming or mathematics. Stuff like the weather, somebody's mood, and so on.

"Monoid" just doesn't have any everyday meaning like those terms do, especially a meaning that so closely resembles the programming concept like in the case of "function" and "variable".


The everyday meanings of these words are misleading when applied to the context of a particular programming language. Variables in one language are not at all related to variables in another. Mathematical variables, for example, do not change at all. Similar differences exist for functions and objects. These highly-overloaded terms are a constant source of confusion for people trying to learn different programming languages.

Monoid, on the other hand, has no such problem. If you look it up, its meaning in mathematics directly translates to its meaning in Haskell.


"These highly-overloaded terms are a constant source of confusion for people trying to learn different programming languages."

Yeah, they mean yet something else in statistical software.


Do you mean names like RAII? I always break my tongue on this one.


The former was chosen to make a point, but I contest the claim that Monoid is extremely abstract. It's very concrete: it effectively describes an interface, and anything implemented according to certain laws is a Monoid, just like a Container or Iterator. The name has existed for nearly a hundred years in mathematics, and it relates very obviously to any data structure that is a monoid.


Perhaps the problem is that there's no good real-world equivalent of a "monoid" that basically everybody is familiar with.

Most people are inherently familiar with the concept of a "container", in the sense of something that holds other things, regardless of whether it's a physical container or a data structure.

"Iterator" might not be as clear as "container", but at least most people are familiar with the concept of doing something repeatedly, whether it's some physical action or executing some chunk of source code.

"Monoid" just isn't like that. It may be concrete when constrained to the context of mathematics or programming, but it's quite meaningless outside of those very specific contexts.


"the problem is that there's no good real-world equivalent of a "monoid" that basically everybody is familiar with."

But that isn't really true. Addition and multiplication are both real-world examples of monoids that everyone is familiar with (obviously, these aren't physical things, but I don't see why that's relevant).


Just realised I may have missed your point; anyway, you're right that "monoid" is harder to find an easily-understood counterpart for than "container". There are examples of monoids which everyone is probably familiar with, but we don't usually abstract from those everyday examples; whereas as well as there being everyday examples of containers (boxes, bags), we are familiar with abstracting away from those examples in everyday life to come up with the general concept of container.


"typedef" is one of the most misleadingly named constructs in the language. You can "define a type" without it, it just changes how things are tokenized. Giving it as an example of good naming is weird.


Don't you mean template <typename HasABinaryAssociativeOperationAndLeftIdentityAndRightIdentity>


It seems a massive violation of basic DRY to come up with new names instead of using those that have been a staple in math for the last 50 years. It's also just about 5 words in total, if you count the most important and widely used Haskell/FP typeclasses. I really hope people in general are able to memorize five words.


>It seems a massive violation of basic DRY to come up with new names instead of using those that have been a staple in math for the last 50 years.

DRY is not about reusing names from other fields. Perhaps you were going for the "principle of least surprise".

But given that most people are not familiar with those math names in the first place (including mathematicians not versed in type theory; trust me, I've asked a few math colleagues from university and it's Chinese to them too), I doubt even that applies here.

So, no, some arbitrary names like Monoid and co. (obscure even for people with passable math knowledge, and picked back in the day) are not the best naming scheme for that behavior that people can come up with.

The best that it has going for it is that it has been used in math "for the last 50 years" (still, hardly a "staple").

A lot of math naming and notation is arbitrary and a historical accident. That was one of the things the "axiomatic language" guys tried to solve back in the '20s after all.


If you don't know what a Monoid is, you are probably a pretty poor mathematician, as it is a core structure in even undergraduate abstract algebra.

Many computer scientists already understand reasoning about abstract structures and use them in their work.

For example, commutative operators are an incredibly powerful tool in distributed systems, as I am able to disregard ordering. In general it is useful to be able to say that I meet an abstract specification (i.e. a Monoid, Group, Ring, etc.) because it allows me to perform operations with confidence that they are correct.

If you doubt the usefulness of this, just look to Twitter (can't get much more "real world" than that) and see how they have been leveraging abstract reasoning in projects like Summingbird (and its sub-projects like Algebird, Bijection, etc.).


I think what you meant to say was "you are probably not very well versed in a specific branch of mathematics." Because otherwise you're coming off as a needlessly--and baselessly--arrogant asshole who has also mistakenly generalized his specific area of expertise far beyond what it actually is.


>If you don't know what a Monoid is, you are probably a pretty poor mathematician, as it is a core structure in even undergraduate abstract algebra.

Well, I asked undergraduate math students on their final year, and they didn't know. Perhaps all PhDs in math do, but that's not much of an argument in reusing those terms in CS.

I myself took several algebra classes in different years at university (for the CS degree) and we never talked about Monoids and such.


> Well, I asked undergraduate math students on their final year, and they didn't know.

I have no reason to not believe you, but how is this possible, and what does it say about either those students or that university or even both? Do they also not know what a group is? A ring, a field? How can a mathematician (and a nearly graduated student in maths is certainly a mathematician) not know what basic algebraic structures are? I'm shocked, frankly. It's literally equivalent to saying they don't know what real numbers are, without exaggerating one bit.


>* I have no reason to not believe you, but how is this possible, and what does it say about either those students or that university or even both? Do they also not know what a group is?*

They should know what a group is, because even we CS students were taught group theory for a whole semester.

As for the monoid thing, I guess they could have been taught it at some point, along with 20 other branches and fields of mathematics in different classes, and they were more likely to remember a) the basic stuff, and b) the stuff they started to specialize in and concentrated on when writing their thesis. So I guess if I had talked to someone who had taken a preference to type theory and such, they would have known.

>How can a mathematician (and a nearly graduated student in maths is certainly a mathematician) not know what basic algebraic structures are? I'm shocked, frankly. It's literally equivalent to saying they don't know what real numbers are, without exaggerating one bit.

Is it? I don't know. You can read whole books on algebra and never meet the word "Monoid" once in them, including some university books.

For example (not even one mention): http://www.amazon.com/gp/search?index=books&linkCode=qs&keyw...

This again no mention: http://www.amazon.com/gp/search?index=books&linkCode=qs&keyw...

This (a "graduate level" book) only mentions Monoid once, somewhere on the first pages, and adds that it won't be emphasised: http://books.google.com.sg/books?id=C4TByeUh9A4C&printsec=fr...

And I'm pretty sure that in our algebra books (for CS students, 2 classes in the 1st and 2nd year) there was not much mention of monoids (or, possibly, the professor skipped over those chapters, maybe gave some brief explanation and went on to other things).

Of course, what university graduates of a field do not know can sometimes be even more daunting. For example, a surprising percentage of degreed Computer Scientists cannot even solve the fizz-buzz problem:

http://blog.codinghorror.com/why-cant-programmers-program/


Of course, not remembering exactly what a monoid is is completely fine for a mathematician whose field isn't abstract algebra and related stuff, but to be clueless as to what the concept is at all is somewhat disturbing. I would have no problem if they said something like "ah, yeah, it's a little simpler than a group, I can't remember the exact properties ATM".

Hm, I just gave a quick glance at one group theory textbook (the only one I have in English), and to be honest, in the first 20 or so pages there is no mention of the word "monoid" (as far as I can see; I really just quickly scanned the first chapter), but it does mention semigroups (which are just an identity element away from monoids)...

> And I'm pretty sure in our Algebra books (for CS students, 2 classes in the 1st and 2nd year) there was no much mention of monoids (or, possibly, the professor skipped over those chapters, maybe had some brief explanation and went on to other things).

Yeah, and that's fine, but for maths students to not even remember that it was mentioned is kind of sad. It shows they don't really care that much about what they're doing. If it wasn't even mentioned by the professor, that shows a disinterest on his/her part in actually teaching concepts. It's always much better, when teaching a concept, to at least mention related concepts or similar ones at different levels of abstraction; that gives a complete picture in which you can neatly place things. A group doesn't just appear out of nowhere (although it certainly can be defined on its own), and it's not alone...

> For example, a surprising percentage of degree-ed Computer Scientists cannot even solve the fizz-buzz problem:

Sigh... I know. But you see, that's why I found this discussion about the unintuitiveness of the word "monad", here of all places, absurd. I wouldn't expect people who cannot solve fizz-buzz to have any clue about anything, and on some level I'm fine with that; this is not an elitist rant. But people in a place like this? After all, programming routinely involves dealing with concepts that are much more unintuitive than a monoid, IMHO.


We are not mathematicians. Those terms are not really part of our industry.


Though many of us are computer scientists (at least by education), and engineers. Vocabulary does not just manifest out of thin air. Where do you think the words function, or algorithm came from? I can tell you they were in use before 1950. We have already collectively borrowed a huge amount of vocabulary from mathematics. There is nothing wrong with adopting a few more words that accurately describe what these objects are. For example, we could call Functor "Mappable" or some such, but it falls short in describing the abstraction. For example, functions of one argument form a Functor, and its "map" is function composition. So for functions, Mappable is a pretty poor name.

Edit: Yes, languages get this wrong, but I was focusing on the borrowing of vocabulary more so than the correct usage of it. Many words are abused, like Functor to mean function object in C++, functions referring to procedures, ad-hoc vs. parametric polymorphism, strong and weak typing, etc.


Unfortunately the term 'function' is somewhat misused in most programming languages, where 'procedure' would be a better term. The Wirth languages, Haskell, and Nimrod are the few that seem to get this right.


Ada too, it has distinct procedures and functions; the latter producing only one output based only on its inputs (though I believe recently they added the ability to also have out parameters so multiple values can be returned).


Eh, sorry, but you work with a machine whose purpose is to manipulate mathematical constructs, following a mathematical algorithm, receiving mathematical symbols as input, and outputting that same class of symbols, originally invented by mathematicians, and improved by them through several iterations until they achieved the form they have now.

Oh, and lots and lots of your peers are formed in an specialization of mathematics called "Computer Science".


Of course. But the fact that I'm doing this work on a device built from electronic components does not make me an electrical engineer, and the fact that physicists discovered the principles which make semiconductors possible does not make the work I'm doing a part of physics.

One might as well say that biology is a branch of philosophy, because it was the source of all the natural scientists; but such a description, in modern times, would not be particularly informative.


Um, which industry do you work in? Enormous amounts of words from mathematics have been incorporated into the programming lexicon. Vector, matrix, index, graph, tree, function, domain, ... the list goes on and on.


The words you listed tend to have well-understood meanings in numerous contexts aside from mathematics that are still related in some way to the software concept.

Some of those words are very likely non-mathematical words that were adopted and re-purposed by mathematicians. "Tree" is inspired by how the data structure's shape is very similar to that of trees (that is, the plants), for example.


You missed the point, they all come from mathematics and their meaning in the context of programming is precisely related to their mathematical context.


What's wrong with the names? I see this complaint everywhere but not a single person has yet managed to explain to me what's wrong with them. Is it because they're from math? Perhaps we should rename them after dog breeds or something.


Nothing is wrong with them. A lot of people aren't used to the level of abstraction that Haskell uses, so they look at a specific instance of the structure (often the List type) and think of a name that fits that instance, ignoring the more general class. You end up with names like Appendable (Monoid) or FlatMappable (Monad), both of which give horrible intuition for the general case.

I think the abstract algebra names are great because they describe the general case precisely without any preconceptions carried along with them.


"I think the abstract algebra names are great because they describe the general case precisely without any preconceptions carried along with them."

And there's papers about interesting things you can do with them.


What's wrong with the names is that they don't communicate very well to the 95%+ of programmers who aren't in the FP/Haskell world. That's most of the people who are going to try to use it on C++.

I mean, some concepts are clearly not able to be explained to the layman; you need some background. But do people really need to learn abstract algebra to be able to get their heads around this stuff? Are there no better terms that communicate to more people?

And, given that that's the problem, clearly dog breeds are not the answer either.


"What's wrong with the names is that they don't communicate very well to the 95%+ of programmers who aren't in the FP/Haskell world. That's most of the people who are going to try to use it on C++."

I'm not a C++ programmer; I don't know what half the names are from the GoF book. Does that make them bad names? No! Since when should a word have to carry around its own definition so that people who've never seen it before automatically know what it means?

"I mean, some concepts are clearly not able to be explained to the layman; you need some background. But do people really need to learn abstract algebra to be able to get their heads around this stuff? Are there no better terms that communicate to more people?"

Concepts like Functor and Monoid and Monad are extremely simple. You don't need to go back to school to learn them. You only run into trouble if you expect them to be explainable by analogy to a more concrete notion.

"And, given that that's the problem, clearly dog breeds are not the answer either."

It was a sarcastic remark but it highlights an important point I want to make. Assuming you knew nothing at all about dogs; if you were given a list of dog breeds and a stack of photos of dogs, would you be able to match the breed names to the photos? No, probably not. That doesn't mean their names are bad; you simply lack the experience to figure it out.


I think the point is that if you did get into C++ you'd get a pretty good grasp of the keywords and other common identifiers without even having to look them up. They are mostly all designed to appeal to intuition.

>Concepts like Functor and Monoid and Monad are extremely simple. You don't need to go back to school to learn them.

The issue here is not at all how complicated the concepts are. The issue is how common and/or descriptive the name for them is. "Monoid", to someone who hasn't studied abstract algebra, is neither. Something like "Appendable" might be more appropriate for C++.


> "Monoid", to someone who hasn't studied abstract algebra...

Oh come on, man! I can't believe this ridiculous discussion. I'm not lashing out at you, don't get me wrong, but seriously, you think you need to study abstract algebra to understand what a monoid is? Um, I remember that one of the first things mentioned in, literally, the first lecture of an undergraduate course on calculus was a monoid, among other algebraic structures. Granted, it was theoretical physics, but still, I doubt any respectable university teaching CS would fail to give students at least some familiarity with basic algebraic structures. But you don't even need a university education to grasp this. Seriously, stop being afraid of precise terms; there's nothing scary behind them.

EDIT: And to actually address your point :), "intuitive" can only go so far. Sometimes there is simply no intuitive term that can do justice to the concept at hand. As Feynman once said: "I'm not going to lie to you, I'm not going to tell you it's like a ball bearing on a spring because it isn't." Especially in programming I'm not convinced that intuitiveness of a mere label for something is a valuable goal. You can't program on intuition, at some point you just have to learn how the thing really works.


> you think you need to study abstract algebra to understand what a monoid is?

No no no. My whole point is that while the concept is not complicated, that particular term ("monoid") simply isn't common.

I agree with your edit section though. And I don't necessarily think C++ shouldn't adopt the term 'monoid' due to its relative obscurity (especially if no better term can be found); I just think it's got some figurative points against it for that reason.


> Something like "Appendable" might be more appropriate for C++.

So I can append False to True to get False (Boolean monoid under &&) or append 8 to 9 to get 72 (integer monoid under multiplication)? There's also a monoid instance for any single-argument function into a monoidal type, where (f <> g) x = f x <> g x; I don't know what to call that, but it's definitely not appending.

Append is a name that works in a few cases but horribly breaks down in the general case.


"Why is it that 'positive integers and minimum' is appendable, but 'integers and minimum' isn't?"


Not every monoid has an append operation; most don't.


To be fair, Monoid's operation in Haskell is called 'mappend'.


"Haskell calls it that" isn't the same thing as "its a good descriptive name".


Of course.


Math has a tradition of one-letter names. I mean who in their right mind would name anything with rich semantics as just a, b, c, or x, y, z, or B, C, D... Oh wait...


We are talking about the names of algebraic structures, not symbols in equations.


It was a joke. Notice the letters B, C, D... do they look familiar?


Answered in sibling post above!


Some, not all! I'm a Haskell folk and I've no interest in changing C++. I'd rather just use Haskell and teach others about it! Isn't that novel?

Personally, I don't think you'll ever reach perfection if you restrict yourself to adding features. Sometimes you've got to take them away and that's not really feasible with a mature language like C++.


What do you think about a language like Rust that is trying to bring some of these ideas into a C++-like language from the get-go, without the necessity of interacting with code written more than two decades (or one decade, or one year, or sometimes even one month) ago?

I'm curious because you seem to have made two different points: 1) it doesn't make sense because adding new features to such an old and widely used language is hard, and 2) it doesn't make sense because of the kind of language C++ is. (1) isn't true for Rust, but (2) is.


I don't know Rust, so I'm not going to try to voice an opinion that would be unsubstantiated.

As for your observed point 2, though: at some level, systems languages need to allow the twiddling of bits on and off, perhaps allow direct interaction with the processor via assembly, have some notion of native word size, and perhaps also make the common idioms of low-level programming simple (stacks, interrupts, and so forth).

It is very rare (in my experience, which may not be accurate) that these issues should be even visible at semantic layer concerned with mapping functions over data and other functions, with sorting sets, and sending messages. It's simply too polluted a headspace.


In my experience, C++ is really a double-edged sword in this regard.

At its best, it provides tools for building cost-free abstractions that hide bit-twiddling and assembly hacks while providing an expressive set of building blocks for the semantic layers above them. In this case a programmer only thinks about one layer at a time, so the mapping and messaging can be tuned to their domain, but the implementation is abstracted.

At its worst, it provides the verbosity of Enterprise Java with the memory safety of C and it becomes very difficult to trace what the program is doing and how. I have found this is very strongly related to the widespread use of non-smart pointers. This lets the semantic domains smear into each other and tends to produce astonishingly brittle code.


>Why are all the Haskell and functional folks trying to screw up C++?

All 10 of them? You do understand that these C++ proposals and additions have nothing to do with Haskell people, but come from seasoned C++ veterans, like Herb Sutter and such.


Actually, Haskell people do influence C++ rather heavily. The ideas behind many of the functional aspects of C++ can be traced back to Microsoft Research in Cambridge, UK, where Simon Peyton Jones works and whose collaborators/students went on to design F#, which very strongly influenced C#, with major contributions from Erik Meijer, a Haskeller extraordinaire and the inventor of LINQ and Reactive. The Microsoft PPL library, which is the driving force behind the most recent concurrency extensions to C++, was created by functional programmers. The truth is that C++ cannot survive in the multicore world without an injection of functional programming.


Can you explain what about this article makes you think Haskell people are screwing up C++?


Even if you feel that way, remember part of the philosophy of C++: You don't pay for what you don't use.

You don't like this? Fine. It's pretty easy - don't use futures, and it won't be in your face. You can ignore it completely and use the rest of C++ as you see fit.


You don't have to use C++XX features you don't like. One of the more beautiful things about C and C++ that I like is that I can go as "bare bones" as I want and still have an incredibly powerful (not to mention performant) language to program in. You didn't (and still don't) have to use STL (any version). You also don't have to use the lambda function syntax or range operator, move operators or rvalue references, or basically any other thing that is "new" to the C++XX standard(s). In fact, if you eschew these things your code is likely to be more portable and longer lived, although it may require more time to develop, and include more re-implementation of boilerplate.

Also it's amusing you opine that pointers are "awful" in the same post in which you rail against others for "screw[ing] up C++". Pointers are one of the best things about the two languages in my opinion. I love the power and flexibility they provide.


You mean like they screwed C#? Or Harmony?


yep


> Pray tell me, why do I want a language which has both atomic exchanges (sorta, kinda, depending on your compiler and environment) and monads and function mapping? Doesn't that strike anyone else as, oh, I don't know...ill-focused?

Dunno, but I think part of the reason Rust is popular lately is that it combines safety and functional features with high performance by default and the ability to /theoretically/ easily drop down into lower-level code to eke out more performance.

I think C++ is an unnecessarily complex language for various reasons, but adding some basic functional methods to an STL class isn't the end of the world on that front and sounds fairly useful.

p.s. Java has atomic exchanges.


They're actually trying to fix C++. :)


One doesn't fix perfection.


It's easier to start from scratch.


Do that and tell us how many people use your language. We'll be waiting.


Here's hoping Rust will give them a run for their money.


The article discusses a proposal to add new functionality to part of the standard library, which just happens to mimic a monadic interface. I fail to see how it even affects the language.


I find it quite funny. Monads were once considered a hack grafted onto functional programming to add some imperative features like, you know, IO and similar funky stuff for which there was no math theory. Now seeing them grafted onto a language that is considered the golden measure of all imperative languages is kinda LOL.


That's somewhat backward. Monads come from category theory.

A while ago the FP people figured out that their `hacks' to do IO actually conform to the structure of a monad, so they exploited that fact. Now the author sees that the `hack' to do futures in C++ also conforms to the structure of a monad, and suggests exploiting that fact.


Sure, monads come from category theory. IO and interactivity don't.

Yes, though futures were always like that; look at Java, for example. Java always suffered from the need to prepare complex design patterns to do the simplest things, as it was much less flexible than C++. Now C++ is getting there as well.

I personally would stick with some earlier C++ standard and use Haskell for my functional programming needs. Feels much more natural than trying to bend each language to support every thought process imaginable, and doing a poor job at that.


Imperative languages seem like state monads. But a monad doesn't need to be a state monad. If you want something other than a state monad, it might not be any simpler to implement in an imperative language than in a declarative one.


Why are people trying to ruin the beauty of functional programming by grafting it on like a ninth leg to C++?


I for one welcome our muddy::future!


This is a great example of how theory enthusiasts ignore the practical world. Think of a similarly ironic claim by a C++11 guy: Haskell, I see a lambda in your future!


Please no. I liked C. Watching it die the death by a thousand cuts that is C++ featuritis is painful.


Eh, C++ was always a bit of a lost cause with respect to "featuritis", wasn't it? It's not like you can't still use C.


You can still use C! C++ has always been the sum of all the programming language features Bjarne had heard of and liked, so this is simply C++ continuing in its traditions.



