Hacker News
C++20 Ranges (ericniebler.com)
196 points by ot 86 days ago | 160 comments



At first I wondered why he calls the monadic flatmap operation "for_each" but it makes sense in that he's actually creating a (lazy) list comprehension DSL here. The code within main() is mostly equivalent to the following Python code:

  import itertools

  triples = ((a, b, c) for c in itertools.count()
                       for a in range(1, c+1)
                       for b in range(a, c+1)
                       if a*a + b*b == c*c)

  for (a,b,c) in itertools.islice(triples, 10):
      print("{}, {}, {}".format(a,b,c))
The equivalent in Rust, just because I'm currently learning it:

    let triples = (1..)
        .flat_map(move |c| (1..=c)
            .flat_map(move |a| (a..=c)
                .flat_map(move |b| Some((a, b, c))
                    .filter(|_| a*a + b*b == c*c))));
    
    
    for (a,b,c) in triples.take(10) {
        println!("{}, {}, {}", a, b, c)
    }


I was already thinking that his example looked incomprehensible, but your Python example makes a complete mockery of it.

The difference is night and day, I'm dismayed at how horrible that C++ code is, hope I never have to deal with such crap.


> but your Python example makes a complete mockery of it.

I mean, try getting the same kind of performance and static verification in Python?

So much of the complexity of C++ is due to the static typing and the optimizations that can come with it. If you're willing to give up and make everything dynamic then of course it'll simplify your code, but you get a massive penalty both in terms of performance and in terms of error-checking.


Upvoted, but to be fair, Rust shows that expressiveness and performance aren't mutually exclusive. It does not (currently) have language-level list comprehensions, but those could be easily enough implemented as simple syntactic transformations over the existing iterator combinators, the way Scala does it.


Thanks, yeah, I was very careful to avoid claiming they're mutually exclusive. However, although I stopped trying to learn Rust back in 2016, unless it's changed its fundamentals radically (which I understand it hasn't), I'm not sure it's really a counterexample here for expressiveness. Can you use different allocators for different vectors in the same function in Rust? Can you have custom move semantics for objects that might know about pointers to them? And does the compiler no longer complain about code that really should work just fine? [1] I understand the answers to most if not all such questions are still "no", at which point I stop believing it really has the same flexibility as C++ or the same ease of writing as Python, but would be happy to hear if things have changed.

[1] http://softwaremaniacs.org/blog/2016/02/12/ownership-borrowi...


In general, Rust will probably never be the sort of Swiss Army chainsaw that C++ is, and that is by design. For instance, there will most likely never be custom move semantics; the fact that Rust moves are always simple bitwise copies is a feature. But we're probably talking about different types of expressiveness—what I had in mind was modern type system features such as algebraic data types and pattern matching, monadic/combinator-based vocabulary types such as Option, Result, and Iterator, trait-based polymorphism and so on. C++ is slowly getting many of these as well, but often the corresponding features feel clunky and "design-by-committee"-ish.


> And does the compiler no longer complain about code that really should work just fine?

In the general case that's theoretically impossible, and also not really a good measure anyway.


Okay, how about in the practical cases that actually come up and make you have to work around the compiler?


Non-lexical lifetimes, which were just recently stabilized [1], make many of the most common cases of borrow checker frustration "just work". And now that the groundwork is done, even more cases will likely be supported in the future.

[1] https://rust-lang-nursery.github.io/edition-guide/rust-2018/...


Confused, how are these common use cases, and how is this "just working"? In both examples on that page there is an unused variable (seems rare to do intentionally?) and in the second case where one of them is used, it seems they just elaborated the error message.


A common use case might be something along these lines:

https://play.rust-lang.org/?version=stable&mode=debug&editio...

While the `entry` API provided an ergonomic, more performant way to do this, this is an example of a common pattern one might expect from other languages that, until recently, did not compile.

NLL and other ergonomics improvements have helped a lot with these paper-cuts.


Incidentally: here's what that whole `match` block could be with the entry API:

  *example_map.entry("some_key").or_insert(0) += 1;
Or, if you prefer an extra intermediate for clarity:

  let entry = example_map.entry("some_key").or_insert(0);
  *entry += 1;
This is a good example of how, often, Rust’s approach of having greater underlying complexity but exposing it sanely ends up allowing you to write something that is semantically superior and easier to reason about, while being faster too.

This and Rust’s ownership model (which I declare stands under the same banner) is the sort of thing that I regularly miss when working in JavaScript (and regularly missed when I used to work in Python plenty).


> objects that might know about pointers to them

The Pin<> type constructor is being standardized to support this case. This is not so much "custom move semantics" as a lack of built-in "move" semantics of any sort, but the basic use case is being worked on.

> And does the compiler no longer complain about code that really should work just fine?

As other people have mentioned, the NLL feature addresses many cases where this used to be a problem, and has now been introduced to the 'stable' version.


I'm not sure about the specific code sample, but take a look at Nim. It's python syntaxy but 'fast' and compiled.


Thanks, I've heard good things about it! I tried for a bit but then had to do other stuff, but I'll finally have to dig into it at some point... just haven't managed to bring myself to make the full investment.


The terrible thing about the C++ and Rust solutions is that one needs to understand many idioms and quirky syntax choices in order to comprehend what's going on.

In Python it's blindingly obvious what is happening: one is creating a group of 3 values, where the values are in specific ranges if a certain condition is met. Itertools.count() is perhaps opaque, but otherwise this is literally how the code reads!

Rust is already going into the weeds after the first '=' with its flat_maps and (1..) = c, whatever the heck that means and the C++ code is not even worth talking about.

Performance is not an excuse, since loops can solve this with good performance and probably the best readability.


> and (1..) = c, whatever the heck that means

The 1..=c is Rust syntax for an inclusive range. Rust also has syntax for ranges that exclude the last value like Python’s range(1, c + 1), which is 1..(c + 1). Using that would make it look more like the other examples, though since the language has syntax for the idea “1 to c inclusive”, it makes sense to just use it.

For comparison Ruby uses a..z and a...z for inclusive and exclusive ranges respectively. I like that Rust’s inclusive range is more visually distinct, and I can see why they went for a symbol that implies the last element “equals” another value, but it does look unusual at first glance.


This exact thing is one feature I like about Perl 6 and (minorly) dislike about Rust.

In P6, 1..10 is exactly the range you'd expect, while ^ is used to exclude either end of the range. For example, 1..^11 is the same as 1..10, as is 0^..^11 (can't say I've ever needed that, but I could see a couple of possible uses).

This also carries over into range syntactic sugar. ^10 is shorthand for 0..^10, and as a result ^@list.elems is a list of valid indices of a list.

I personally find Rust's syntax noisier, but it's a small issue.


A downside of 1..10 is, how do you represent an empty range starting from 1? 1..0? That looks so weird... I'd expect it to mean a backward range starting from 1 and going up to 0 inclusive. A nice thing about end-exclusive notation is you can denote backward ranges, forward ranges, and empty ranges with the same notation.


> In P6, 1..10 is exactly the range you'd expect

That syntax is impressive, handles all four possibilities nicely.

However since reading this short article by Dijkstra, I’ve been convinced that 0..N ranges (inclusive start, exclusive end) are the way to go whenever there’s a choice:

http://www.cs.utexas.edu/users/EWD/ewd08xx/EWD831.PDF

There's certainly a mental benefit of having made a choice and being consistent, which I think is possible with any of the options, but I find the argument of having length = end - start compelling.

It does fall down in some cases, e.g. when looking for a range over all unsigned ints, as the total number of uints isn't a uint itself (as uints start at zero), so some way to represent a range that is inclusive of both ends is still needed.


As a minor nit, it may be better to use `@list.keys` to get a Sequence of indices. Especially if you want it to work when `@list` is `lazy`.


If you want static typing and better performance than Python, there’s still Java and Go.


The big difference is that it's a pure library solution in C++. The code inside main is not too bad once you have the building blocks in place (implemented in a library, of course, not by yourself).


The trade-offs between the two languages are so huge, and you seem unaware of what those trade-offs are.


slavik81 showed a simpler for_each in a comment:

    auto for_each = []<class R, class Fun>(R&& r, Fun fun) {
      return r | view::transform(fun) | view::join;
    };
similarly, it seems that yield_if can also be radically simplified

    auto yield_if = []<Semiregular T>(bool b, T x) {
        return b ? view::single(x)
                 : view::empty<T>;
    };
But if you actually read the blog post... for_each and yield_if are a part of the range-v3 library and are proposed additions to the language.

Also I don't know what the heck is up with that cout statement; we can use structured bindings to make that look more pythonic too. And there's also a proposal for string formatting [1]

  #include "fmt/format.h"
  #include <iostream>
  #include <ranges>

  using namespace std;
  using namespace view;
  using namespace fmt;
  int main() {
    auto triples =
      for_each(iota(1), [](int z) {
        return for_each(iota(1, z+1), [=](int x) {
          return for_each(iota(x, z+1), [=](int y) {
            return yield_if(x*x + y*y == z*z,
              make_tuple(x, y, z));
          });
        });
      });
 
    for(auto [x,y,z] : triples | take(10))
      cout << format("{}, {}, {}", x, y, z) << endl;
  }
So despite all the protests about how ugly Eric's solution is... I don't think the above is significantly worse than your python & rust equivalents. Speaking as a python evangelist who uses C++ out of nece$$ity.

[1] https://github.com/fmtlib/fmt


FWIW, overly long iterator comprehensions, or unwieldy chains of them, are common in Python, but a local generator function is much cleaner most of the time in my experience. The difference is slight here but the rule of thumb never goes wrong for me. Splitting the logic out into a generator function tends to make the flow more readable, and it becomes easier to make changes to the logic or add intermediate filters or variables.


Or in Haskell:

    triples = [(a,b,c) | c <- [1..], a <- [1..c], b <- [a..c], a*a + b*b == c*c]
    mapM_ print $ take 10 triples


In current C++, lots of std functions are annoying because you need to give both .begin() and .end(), while in 99% of the cases when using a C++ container you want to do your algorithm on the whole container (a function with 2 parameters is good to have for the other 1% of cases and the case of 2 pointers of course).

What took so long and why are "ranges" needed for that? Most other languages have sort, transform, ... with a single parameter (such as a container or array), and those don't require something called "ranges" for that.

Why couldn't C++ at least already have long had convenience functions that just take a container and do the .begin() and .end() for you?

I'm all for having "std::transform" and stuff, but if it requires you to type just as much as writing it manually with a for loop due to having to type the name/expression of your container twice, what's the point.

EDIT: oh, and it's going to be std::ranges::sort(v) instead of just std::sort(v)? What's wrong with naming it just std::sort(v)? Do C++ users really need to be reminded it took a "ranges" concept just to sort a simple container every single time they'll use it?


Why it took so long is that:

(a) C++ is an international standard, has a huge installed base of business- and mission-critical software, probably billions of lines of code, tens of millions of programmers, and we can't afford to get things wrong (though we still do sometimes).

(b) Nobody has built a large, _very_ generic, general purpose, high-performance library, on top of the (also new) C++ concepts language feature before. We were figuring it out as we went.

(c) While standardizing this piece, we were also thinking 5 to 10 years down the road and trying to make sure what we were standardizing now would evolve well. For instance, we already know pretty well what using ranges with coroutines will look like (i.e., reactive streams). We already know what other lazy range adaptors we want, and we know they work with the bits we're getting for C++20. We know how to integrate eager adaptors that transform containers and compose. This all needed to be built, tested, and well-understood.

As for why it is `std::ranges::sort` instead of `std::sort`, this exercise wasn't just about adding overloads that take ranges instead of iterators. It was also about rigorously specifying in code what the requirements of each algorithm are. For Reasons, we can't just slap new requirements on old algorithms that have been shipping for 20 years. It would break mountains of code. Nobody likes having two `sort` algorithms in two different namespaces, but it is preferable to the alternative.


I’m still not getting your argument. Why is this not “just adding overloads”?


Here's one case that is tricky to get right in c++. At the moment you can call sort(x,y) to sort a range, or sort(x,y,z) to give a comparator. If we allow sort(c) for a container, is sort(c,z) sorting a container with comparator, or a range? It's surprisingly hard to get this right with templates, without concepts.


Do ranges and iterators and comparators even have conflicting requirements? I feel like an object could be all three, in which case it'd be ambiguous what was intended.


No, there's nothing (other than good taste) which stops you from passing a pair of arguments to std::sort() which would be legal for both std::sort(begin, end) and std::sort(range, comparator).


Sounds like they wanted to be a little more aggressive rather than just putting lipstick on that same pig. By specifying the new method they can clean up the rough edges for performance or ergonomics without breaking existing code.


A few points:

If you don't want to deal with begin/end, there are lots of utility libraries, and there's not much reason to standardize it. For example, here is Google's wrapper library (disclosure: my employer) that takes "just the container": https://github.com/abseil/abseil-cpp/blob/master/absl/algori...

Regarding your edit: it's pretty simple in C++ to alias namespaces as needed for your project. The standards committee is working with a big install base, and they have a few hard requirements. They don't break the ABI. They try not to break calling code in other ways. (Adding overloads while client code passes around pointers to your functions can have surprising effects, perhaps causing their code to stop compiling. Pretty sure one of abseil's requests is that clients don't take addresses of their functions for this reason.)

Stack overflow on taking addresses of overloaded functions: https://stackoverflow.com/questions/2942426/how-do-i-specify...

Abseil compat guidelines: https://abseil.io/about/compatibility


I agree. The "flexibility" of being able to operate on a subset of the container was never a good argument, because you could trivially make a "view"/slice template type to fake begin/end to any pair of iterators.

Make the common case short and the uncommon case possible.


Ranges unify C arrays and C++ standard library types in a rather elegant, general purpose way. Though, I’m still of the opinion that every function that takes a range should also be overloaded to operate on the whole container as well (and my personal C++ “library” does this already).


> because you could trivially make a "view"/slice template type to fake begin/end to any pair of iterators.

In C++, that was not trivial at all. Implementing a custom container with iterators (which you'd have to do for this - you can't just propagate iterators as is) is very verbose, since there's no syntactic sugar for it, like say generator functions in Python.


> why are "ranges" needed for that?

They aren't, but without ranges, things that accept iterators are more general than things that accept only containers (e.g. you can't iterate over half a container without a range, iterators, or copying to another container through other means, which would defeat the whole point.)

As such it makes sense to have a generic range type first, then algorithms that operate on containers/ranges.

Additionally, if you can take a range<Iter> instead of a Container (I haven't even checked if C++20 algorithms do), you might get better error messages (e.g. "Xyz cannot be converted to a range" instead of "Xyz doesn't implement begin()").

> What took so long

Everything takes long in C++ standards land. I have no special insight into the actual workings of the standards committee, but I've always chalked it up to a combination of being extremely conservative about changes, made worse by past missteps burning them, and less than ideal foundations which already made C++ hard to parse, slow to compile, and inconsistent across compilers, and often not fully implemented.

It's not like there's been some giant technical hurdle across the board - boost::range has been around for well over a decade (in boost since 1.32.0 in 2004).

Faster iterating languages usually have a fresh start unencumbered by backwards compatibility concerns with large existing codebases, less cruft inherited from other languages (cough C cough), and a single official implementation which can be used to more quickly prove out good ideas and discard bad ones.


> They aren't, but without ranges, things that accept iterators are more general than things that accept only containers (e.g. you can't iterate over half a container without a range, iterators, or copying to another container through other means, which would defeat the whole point.)

Yes but they could have provided both a single-container-parameter version and a two-iterator-parameter version of all those functions like sort and transform. Templates and SFINAE support that, and the single parameter one would call the two parameter one with .begin() and .end() for you.

They should try to make the std library such that you type shorter code, not longer code, imho: longer expressions are less readable. But well, it looks like ranges mostly do that for the user side (except that it uses std::ranges:: instead of just std:: as prefix) so at least that's nice.


> Yes but they could have provided both a single-container-parameter version and a two-iterator-parameter version of all those functions like sort and transform.

For my own libraries I'd skip the iterator accepting version. Implementing one range type is way easier / less prone to oversights and other mistakes than double implementing every algorithm. Or more than double implementing, if you need perfect forwarding for non-copyable functors without std::forward pre-C++11. Even though all they do is forward.

SC++L doesn't get much choice since they started with iterators only, but even there, I'm not sure how much time skipping std::range saves you. You can't even take a plain C style array without either a range type, free-form std::begin/std::end, or the iterator variant.


> and the single parameter one would call the two parameter one with .begin() and .end() for you.

Doesn't work with arrays, since they don't have member functions.


> Why couldn't C++ at least already have long had convenience functions that just take a container and do the .begin() and .end() for you?

It's just a boost header that you can use which does exactly this : https://www.boost.org/doc/libs/1_69_0/libs/range/doc/html/in...

It's your problem if you haven't been using it.


C++ standardized on ranges, but in my experience enumerables (or sequences) are the more common case in real applications.

You might find this intriguing https://github.com/LoopPerfect/conduit


Ranges are nothing more than bounded enumerables (iterables in C++ parlance).

An unbounded enumerable would be represented by some way to obtain a forward-only iterator - also an existing C++ idiom.


Thank you!


I know I'm not going to make a lot of friends with this opinion, but this is a perfect example of everything that's wrong with modern C++.

Seriously who, except for a small circle of academics, actually believes that this is good, readable, concise, maintainable code? Who asked for this?

When I read

    inline constexpr auto for_each = []<
      Range R,
      Iterator I = iterator_t<R>,
      IndirectUnaryInvocable<I> Fun>(R&& r, Fun fun)
        requires Range<indirect_result_t<Fun, I>> {
          return std::forward<R>(r)
          | view::transform(std::move(fun))
          | view::join;
       };
I can't help but being reminded of Java [1][2].

Meanwhile it's 2019 and C++ coders are still

- Waiting for Cross-Platform standardized SIMD vector datatypes

- Using nonstandard extensions, libraries or home-baked solutions to run computations in parallel on many cores or on different processors than the CPU

- Debugging cross-platform code using couts, cerrs and printfs

- Forced to use boost for even quite elementary operations on std::strings.

Yes, some of these things are hard to fix and require collaboration among real people and real companies. And yes, it's a lot easier to bury your head in the soft academic sand and come up with some new interesting toy feature. It's like the committee has given up.

Started coding C++ when I was 14 -- 20 years ago.

[1] https://docs.spring.io/spring/docs/2.5.x/javadoc-api/org/spr... [2] http://projects.haykranen.nl/java/


This is library code which most users of the range library will not themselves need to write. Using ranges absolutely leads to shorter and more elegant code. The implementation of the library is complicated. Much of what’s likely unfamiliar with this example is the use of C++ “concepts” (a technical term), and is not actually necessary, but is placed there to actually give better error messages to users of the library (without them, template code is basically duck-typed, and can give terrible error messages). I’d argue that the core of that code: the transform followed by the join is actually quite readable.

I also wouldn’t at all say C++ is overly academic. It’s an extremely pragmatic community. Yes, the standardization process is slow, but I don’t view, as a C++ user, the issues you mention as serious. To me, there’s nothing wrong with using high quality third party libraries like boost (and Eric’s range v3 library). My biggest frustrations with C++ are the build and packaging stories.


Boost is not high quality. It massively balloons the compile time, and there are random incompatibilities between minor versions. It's also a hunt to figure out which header I need to include for certain libraries. There's no consistent feeling to the library. My biggest gripe is that it makes vim's completion stupidly slow by bringing in a large number of headers.


You're talking about compile time and issues with your IDE. It is unfair to call it "Not high quality". We can think about this from an end-user standpoint that it bloats the IDE and impacts usability. Fine. Your IDE/compile times may be different than others.

Boost is extremely high quality in terms of its documentation, algorithms, and readability of the code - which is its core purpose. Please don't conflate your minor development environment gripes with the excellence of Boost. It is really not fair.


I don't think it's an unfair complaint. Developers interface with the language using compilers and IDEs, and until recently (with libclang getting more popular) that interface has been terrible. In a lot of cases it's still terrible. It's a real inconvenience, and there's no benefit coming with the cost.

> Boost is extremely high quality in terms of its documentation, algorithms, and readability of the code - which is its core purpose.

I've used Boost for over a decade now, and IMO "extremely high quality" is a stretch. Some modules are better than others, but there's no getting around the fact that many of them are only necessary because the C++ standard library is so bad.


I am having a hard time accepting compile times as a complaint. That is a compiler problem. Or an inevitable problem as the project grows.

I don’t understand how that is a problem with Boost.

Same thing with the IDE. If you add an external library such as Boost, it is natural that your intellisense or whatever IDE feature may get slower.

Both of these are indisputably an issue with the environment as a whole. How on earth is it a quality problem with Boost? Please explain.


Not OP but - Boost is a large library that can be tricky to extract only the portions you want from, leading to instances where you compile a lot of unwanted code, leading to ballooning compile times in exchange for only a little bit of extra functionality that you want. In my experience boost typically ends up being all or nothing, and very hard to remove once integrated.


Has been terrible? IMHO (as an Eclipse and CLion user), for big C++ projects the IDE experience is still terrible! Vim isn't better, as you need to spend a lot of time configuring it, plus ctags and multiple tabs don't work well together.


Your viewpoint is valid.

But so is the viewpoint that one of the major problems with C++ culture is the misconception that developer experience and high quality tooling are afterthoughts and orthogonal to "quality". In this viewpoint, readability in an editor or IDE is at least as important as readability in a browser or email.


I agree that C++ is a big mess.

Complaining about an external third party library that is trying to fix the issues for its compile time and how fast it loads IDE is very obtuse and unfair. Boost is a bandaid. Don't complain about the bandaid, complain about what causes the wound. Overall, I agree with you that C++ has poor user-experience - don't blame Boost for it!


To be fair, it is very common to hear "well, why don't you just use Boost?" in response to criticism of the C++ standard library. It may be an external third party library in name, but in practice it has long since become stdlib++ for idiomatic C++, and a testing ground for new libraries to be eventually adopted into the standard.


> There's no consistent feeling to the library

Well that's because it is really a collection of mostly unrelated libraries by a large number of different authors.

There is some general theme (emphasis on value-based generic programming in the STL style, as opposed to more traditional OOP), but that's about it.


Finally! It's not just me.

But...I will say some good ideas made it into the standard thanks to a first implementation in boost, though what ended up being standardized is typically much cleaner and more orthogonal.

Then again this is how we got the standard containers and so much more standard library goodness -- via Stepanov stepping up and writing the STL.

The best way to think of boost is as the npm of C++.


> boost is not high quality

This is just false.


You could be right. But this is not a convincing rebuttal.


While I tend to agree with your points overall, I'd like to point out that this part irks me a bit:

> and is not actually necessary, but is placed there to actually give better error messages to users of the library

That's not the first time I see this kind of reasoning; you can find it all over programming. If the alternative is an API that sucks (and yes, template errors in C++ do suck), picking that alternative is not a very viable option.

Another scenario where this often pops up is whenever someone complains about very verbose error handling. If you want to write quality software, you shouldn't just skip that part (in fact I'd say it's the most important part of the code you're writing). In the same vein, there's a talk by Andrei Alexandrescu somewhere (on D), where he points out that probably 0% of the main functions ever written in C or C++ are correct in that they catch everything that can go wrong.

Also, since I find that most code should be written in a library style (i.e reusable, well-encapsulated) except if it's explicitly tied to a specific program (say config or cli argument parsing), I also don't find "only library implementers will have to deal with this" is a convincing argument. OTOH, at least, writing in a library style doesn't necessarily imply overly generic code, so that might not be that much of an issue.


I'm not convinced by the argument that it's fine since most people can just use a library someone else wrote. Requiring complex code to do something simple doesn't bode well if you want to try something complex. And when judging a programming language it's a bit weird to reason from the perspective of someone not writing the code.


Nothing comes for free though. While we can easily write list comprehensions in Haskell or Python, their implementations are complicated. In the case of Haskell and Python, lists and comprehensions are language features, and for C++ ranges, they’re implemented as a library. Both approaches provide building blocks for users to build software and not have to roll their own implementations.


Haskell list comprehensions are a small layer of syntactic sugar over the monadic implementation, which is itself implemented in a library (one that comes with the compiler, but it's still a library).

It is entirely dependent on laziness, first-class functions and complex compiler optimizations. But there isn't much specific code for comprehensions there.


Most of that code is for optimization or error messages. I think you could strip it down to:

    auto for_each = []<class R, class Fun>(R&& r, Fun fun) {
      return r | view::transform(fun) | view::join;
    };
I also removed inline and constexpr because he didn't need them in his example.

I don't disagree that it's ugly, but it's not as bad as it looks when you go through the standard library. Those are functions that have been written, optimized and generalized to handle all sorts of cases. Your needs are almost certainly more specific, and you will undoubtedly write functions that are much simpler.

That being said, I kind of agree. I would have preferred your list of features to getting Concepts. I find SIMD to be a particular sore spot.


Do c++ regulars eventually just get numb to how hideous the :: syntax is to look at?


I much prefer having separate operators for namespace/module qualification and member access. These things seem like they have something in common, but in practice, IMO, the languages that separate them are much cleaner.

Whether :: specifically is the best choice of such an operator is another question. But there aren't exactly many options left, and this one is firmly established by now, even outside of C++.


PHP was changed last minute to use \ which in my opinion is a way worse choice.


That was a horrible choice if only because \ is already well-established for escape sequences (including in PHP itself).


Not the worst piece of C++ syntax by various orders of magnitude.


I haven't yet... but after I got a bug report for polluting namespaces with my library, I uglified my code like a good citizen.


It is as if we don't learn! Same thing in Rust. :: adds so much visual noise.

Julia is the only modern language with beautiful syntax. I think we should pay more attention to the aesthetics of a language just like we do with spoken languages. Certain languages just sound beautiful - Japanese and French.

C++ is old but Rust could have paid more attention to subjective aspects of a language.


> Certain languages just sound beautiful - Japanese and French

You've managed to pick the two languages whose sound irritates me the most, which I think illustrates the fragility of this kind of thinking perfectly.


This has been the story with C++ forever. I think currently half the standardization and compiler implementation bandwidth is taken up by turning C++ into the slowest virtual machine ever devised (constexpr).

The constexpr story is presumably very similar to what happened with templates. There was a genuine niche use case, and that was perfectly covered in the initial implementation. But the feature just lends itself perfectly to extremely clever, single A4 page example code. No one ever went "ooh" and "aah" at a basic_string::split paper.


This looks like something a library author would write while implementing a range, not something that end users would write while using ranges. Could you point to an implementation in a different language that offers a similar abstraction in a more succinct manner (while offering the same type safety and low runtime overhead)?



No, ranges in C++ are analogous to the iterator trait in rust


I feel like what you’re touching on is kind of the core of why C++ can be so difficult to parse through sometimes. It really is trying to make a general-purpose abstraction while also having low overhead. This results in a need for detail and complexity that you simply don't need to worry about in languages like Python or Clojure.

When my university peers and I were learning C++, a lot of us strongly preferred using C99 since it didn’t seem as awful in terms of complexity and verbosity.


Check out Go


I really, really don't want cross-platform SIMD in C++. The reason I write asm is I know how the machine works and exactly what I want the machine to do. It's already hard enough to get what you want from the compiler and trying to make it cross-platform is guaranteed to make it even harder. If you don't believe me try opening an audio device or a serial port from Java, which purports to contain cross-platform libraries for these devices but in fact makes it totally impossible to use them.

As for std::string I never find myself using boost for any reason so I'm curious what the indispensable boost feature is for strings. The only thing I want from std::string is the ability to resize it without initializing allocated space, the ability to construct a string from an existing data pointer, and the ability to release the data pointer from an existing string (basically the union of a string and a unique_ptr).


Do you use std::vector when working with SIMD? I find the SIMD alignment requirements make it difficult to integrate intrinsics with the rest of my code. Maybe I'm just missing something, but I'm having a hard time with it.


That's a good point. If you use a custom allocator with your vector then you can be sure of the alignment, although the compiler will pretend to be unaware of it. All that stuff I want for string goes for vector, too.


Not to mention the nightmare that is organizing the compiler on large projects. I've seen multiple large companies employ teams of 10+ engineers just trying to keep the compiler in order. Optimizations that take weeks or months to develop result in 10x improvements while I'm not aware of developers in other languages having this problem.


This is exciting stuff unless you look at other languages and see what a mess this is.


Java is generally much more syntactically straightforward.


I totally agree. It's stuff like this that will usher in the rust revolution

Edit: also wanted to add - what a disaster


That code sample is outright ridiculous; it is totally out of control. Can't agree more.


At some point, I agree. But then I see people on here talk up Haskell and Lisp, and those are far worse.


That's why people are moving to Rust.


Very excited to be able to use both ranges and concepts soon.

I started writing C++ just before the 2011 standard and it’s been fun to follow and use the improvements over the last 8 years. I can’t imagine how different the language will look (after the 2020 standard) to the eyes of veteran C++ devs. It must already feel like a completely different language.


Yes, it does feel like a completely different language. The sample code in the article is nearly unreadable to this C++ veteran. And based on the verbosity of the result I'm not sure it's an improvement.


I have to agree. Pumping out C++ revisions every 3 years means I can no longer assume a game written in C++ will port relatively painlessly from one C++-compatible platform to another, as it all depends on which C++ standard is supported by the platform SDK/toolset I'm forced to use.



Is the C++ standards committee trying to make the language as incomprehensible to its users as possible?

I cannot even begin to comprehend what assembly is going to be generated in this code sample. Are those calls indirect? Can they be vectorized? How much stack space is this going to use? What (if anything) gets put on the heap?


I haven't tried to compile these specific examples, but a while ago Eric wrote an article about the abstraction cost of (a previous iteration of) the range library. At least GCC generated pretty much the same assembly as the hand written nested for loops.

Since then, both the library and compilers have improved.


The C++ standard has nothing to do with these, and you won’t find a mention of them at all in the text of the standard. The standard defines an abstract virtual machine for running C++ code, and it lays out the guidelines for what compilers are allowed to do with regards to the optimizations you mentioned.


This is kinda true in theory, but about as far from the truth as you can be in practice. Zero-overhead abstraction is a core part of the language's philosophy, and that means features are designed to be translated to "efficient" machine code. If nobody believed that optimising compilers could wring something decent out of these features they wouldn't have been standardised.


The abstract machine is designed to be relatively easy and efficient to implement, but specific features such as vectorization are not part of the standard. Remember, C++ runs on machines that don’t support any “advanced” features, so there is a limit to what can be required in the standard.


I understand what you are saying, and I agree that it's true in principle.

But if C++ did not provide me predictable optimizations, it would be worthless to me. I know scientific computing is only one domain of computing, but without this level of intuition for code -> asm, my day becomes filled with reading objdump outputs, and I don't want that.


I've got the feeling this is the kind of language tooling that will get you a 0.5% compiler-specific performance improvement in very specific real-life use cases.

Most companies are still busy dealing with their C++98 backlog libraries. I don't expect this to get in production code within 5 to 10 years...


You cannot understand every piece of assembly generated by modern compilers either. Both C and C++ provide no guarantee for any of these. Since a range is basically a pair of begin/end iterators, I think the generated code will be almost identical.


I think every HN reader just skimmed this article, saw that for_each example and exploded in fury. He does say the below:

"I’m being a bit pedantic for didactic purposes here, so please don’t let that trip you up.I also could have very easily written for_each as a vanilla function template instead of making it an object initialized with a constrained generic lambda."

He needs to clean up the article and write another which doesn't 'play around' with syntax quite so much.


It is C++, so it is going to be bashed no matter what, regardless of the fact that many languages of similar age have their own set of quirks and breaking changes even across minor versions.


I've been writing C++ since 1995, and programming since 1965. I'm sorry but this stuff is incomprehensible, opaque and cryptic. I don't see how it improves productivity and maintainability. Although I do find the STL very productive.


> I don't see how it improves productivity and maintainability

I find that quite hard to believe for someone with admittedly a lot of experience. I mean, just look at the most basic example:

    func(vec.begin(), vec.end())
becomes

    func(vec)
Do you really not see that can have advantages when it comes to productivity and maintainability?


Thank God I'm not the only one. I looked at this and my eyes glazed over.


What's the difference between this and something like python-esque generators which also appear to be lazy and have a much saner syntax? Doesn't C++ plan to support coroutines which should provide a similar way to lazily evaluate lists?

Also the example has a lot of boilerplate code that reminds me of template metaprogramming hell. Is this something that's expected to be wrapped up nicely by Boost & co. for end users?


A C++20 range is just a bounded container. Instead of having to write `(container.begin(), container.end())` for any function that uses iterators to bound itself, you just write `(container)` and it knows it's a range.

The range function in Python is similar only in name; the C++20 range concept is not a generator.

2. There is a coroutine feature branch, I believe developed by Gor Nishanov, https://github.com/GorNishanov. Also not related to c++20 ranges.

3. Eric Niebler used a single templated function. This is not anywhere close to template metaprogramming hell, and I think you should look into templates as once you learn about them they become way less intimidating. Also has nothing to do with c++20 ranges.


I have yet to see a non-managed systems programming language which supports coroutines properly.

Rust supports async/await, but it's not the same thing, as this comment illustrates:

https://news.ycombinator.com/item?id=15121199


It'll be after 20 for sure, we're still waiting for the executor TS for async networking to be able to come in.


The "for_each" (aka flatmap, or bind, or =<<), "maybe_view", and "yield_if" are basically a DSL for writing (admittedly ugly) lazy list comprehensions (generator expressions in Python or for comprehensions in Scala) in C++ without special language support for them. You're not supposed to have to implement these sorts of building blocks yourself, but they're not voted into the standard either at this point.


Unlike Python, this compiles to pretty optimally efficient code.


Syntax should be dead-simple so that correctness is obvious, compilers don't dump incomprehensible messages on trivial typos, and programmers can focus on efficiency. For example, in something as simple as:

  int z;
  while(++z)
    for(int x = 1; x <= z; ++x)
      for(int y = x; y <= z; ++y)
        if(x*x + y*y == z*z) emit_triple(x, y, z);
a programmer would easily spot that x and y should never reach z, y should be initialized to x+1 and that this will stop working as soon as z outgrows 16-bit values. A programmer would also see that this is a horribly inefficient algorithm.


Also, an electric shock should be emitted via the USB port whenever the compiler detects that a programmer has written 'for', 'if', and 'while' without a space, making them difficult to distinguish at a glance from function calls.


And let us not forget the blasphemy of bracketless blocks of code. Or uninitialized variables.


I'd been working for probably about 5 years before a colleague moaned at me for not doing this - and that is how I discovered that it was a thing people did. I looked again at the code, and... well, what do you know. About 75% of my colleagues did it too! I'd never noticed.


Function calls can be written with spaces, too...


May you burn in hell forever if you code your for/if/while without spaces and your function calls with spaces...

Remember that a compiler is not the only entity reading your code, other humans have to read it too. Have pity on them, they're not as flexible as your compiler.


How does one try out these work in progress features? I tried compiling the example on godbolt.org with gcc-trunk and wasn't sure how to get it working.

https://godbolt.org/z/YSPXLu


Web Archive mirror -> http://archive.is/lnrTw


I feel like it should be called c::


Having switched away from C++, I've got the feeling that this kind of code:

    inline constexpr auto for_each =
      []<Range R, Iterator I = iterator_t<R>, IndirectUnaryInvocable<I> Fun>
        (R&& r, Fun fun)
          requires Range<indirect_result_t<Fun, I>> {
            return std::forward<R>(r) | view::transform(std::move(fun)) | view::join;
          };

is a sort of magical incantation you spend 3 days tweaking; it essentially just works, but within 2 weeks it becomes incomprehensible (even to the author), unreviewable, and code that everybody is afraid to touch (or even get close to)?

Don't get me wrong, there is definitely a lot of flexibility I'd miss from C++, even if the problem can be addressed differently in other languages, but things can get very, if not utterly... convoluted.


This should be a library function that will not be exposed directly to most developers, who only need to know for_each(Range, fun) which is pretty straightforward.


I've heard that line of reasoning before, and it doesn't make sense to me.

Specifically, I find it alarming that it assumes two tiers of competent c++ developers: a lower tier that just uses libraries, and a higher tier that can write good libraries.

I see two problems with this view.

First, in my experience most large codebases are structured as many layers of libraries. So the envisioned non-library developer may be in the minority.

Second, it's a red flag that the language's complexity is dangerously high. If nothing else, one of the envisioned non-library developers may unwittingly write code that uses or misuses these features, and still need to debug incomprehensible compilation or runtime errors.


> If nothing else, one of the envisioned non-library developers may unwittingly write code that uses or misuses these features, and still need to debug incomprehensible compilation- or runtime-errors.

The reason that the code is this complicated is so that end users don’t shoot themselves in the foot (performance-wise, correctness-wise) when using these features. And error messages will get better when concepts land.

To address your main point, though: there is always a separation between library authors and library consumers, even if they are written by the same people, because the requirements are different in each case. A library author must write general and useful code, while a library user can “get away with” hard coding logic and customizing uses to only cover what they need. Hence, the job of a library author is usually much harder.


people will end up using that... A former colleague spent a week simplifying a 10-line template to use C++14's std::integer_sequence; nobody could review his code.

Doesn't really matter at the end. I'm fairly certain the code in question was canned later on by more "old school" devs...


I only see a database error on this website.


I just upgraded my GCE instance. Let's see if it can withstand this withering assault. <makes popcorn>


It's the infamous HN hug of death.


+1

> Error establishing a database connection


Congratulations Eric! Awesome work.


my biggest concern is that Scott Meyers will not be around to explain all the new revelations (like concepts) to us mere mortals - he says he was done with C++.


I think the C++ committee has gone a step or few too far. This code is horribly ugly.


Why would anyone invest in this? I thought C++ was basically in maintenance mode given that greenfield software tends to be written in Rust...


C++ is extremely widely used. Especially since C++11, the language has received a ton of attention for making things more ergonomic. As projects update compilers, they are able to take advantage of these updates. C++ is anything but in maintenance mode.

Rust fits in a similar niche, but it has a much smaller following. Several of its main contributors just happen to be HN regulars, so it gets more attention than it otherwise would.


> C++ is extremely widely used.

C++ is not memory safe - which is one reason why e.g. Java and C# are even more "extremely widely used" than C++. Rust is memory safe, and has C++ -like performance (and arguably better than C++, with e.g. its restriction of pointer aliasing and its high-level support for things like simple parallel programming).


It's very easy, and rapidly becoming even easier, to write modern C++ code that doesn't have dangling pointers/refs. The built-in static analysis that Rust relies on to make its memory safety guarantees is no longer a real advantage. This type of criticism should have stayed in the 00's tbh, because it really isn't true now. It's Rust which will be playing catch-up to gain feature parity with C++ in the coming decade.


OTOH, 99% of the C++ code out there is not using any of those "features".


A lot of C++ code bases are stuck in legacy C++03 or even earlier, for a variety of reasons. Most new code makes heavy use of standard library features such as iterators and smart pointers.


It depends; some C++11 features are widely used, like smart pointers. I agree that most of the features are seldom used and add tons of bloat to the language.


By the amount of downvotes I got you seem to be right.

I was genuinely under the impression that C/C++ was being progressively phased out in favor of Rust, but looks like this is hardly the case. I guess I should probably peek outside the HN filter bubble more often...


C++ has had major specification updates (some are smaller than others) in 2011, 2014, 2017, with a planned one in 2020. This pattern will be ongoing for the foreseeable future.

These specification changes are usually implemented by compilers _before_ the final release date. Some features take longer, but you're usually only looking at 6-12 months before being able to use 100% of the cutting edge.

If anything... C++ is ramping up.


I like Rust, have dabbled a couple of times in it, and see a possible great future for it in the next couple of decades, given that it is being adopted by the likes of Google, Microsoft, and Oracle among others.

However I will keep writing C++ to go along my Java, .NET, nodejs production code, for the time being.

Rust is still lacking in IDE support, mixed-language debugging, the OS SDKs I generally use, and integration with binary libraries; not everyone on the team is willing to add yet another language to the mix; NVIDIA GPUs are designed for C++; and C++'s newer features make it easier to write safer code, in spite of its C underpinnings.


Rust and C++ usage are pretty much inversely proportional to their editorial frequency on HN.

Yes, it looks philosophically nice, but outside of language purists & geeks, managers will not care whether or not you are allowed to shoot yourself in the foot; they'll only care whether or not you can deliver in due time.


I have not heard of a single new project being done in Rust amongst all my friends and colleagues, but I see new C++ projects cropping every few weeks.


> I thought C++ was basically in maintenance mode

You are terribly mistaken.

> given that greenfield software tends to be written in Rust...

You are terribly mistaken again.


> > given that greenfield software tends to be written in Rust...

> You are terribly mistaken again.

True, but I sure hope that changes sooner than later. The top comment here echoes my sentiment about what C++ has become, while another one illustrates how cleanly the example can be done in Python and Rust. The problem as I see it is that long time C++ developers have learned all this arcane BS right along with its development. Anyone new to a language will vomit when they try C++ compared to Python or Rust. The C++ language developers are probably the least able to see what the problem is in this regard.

On a slight tangent, IMHO this approach to language development is at the heart of the issue that lead Guido to step down as Python lead. One of the proposals that got accepted crossed the line between readable code and "this is a compact way to do this but isn't intuitive on its face".


I learned c++ fresh two years ago. While I enjoyed learning python, I had the vomitous reaction to rust, not c++. In fact, I find modern c++ to be quite similar to python for many problems.


Do you have a link to share about that proposal? As a Python fan I'm very curious.


I assume they’re referring to PEP 572 (assignment expressions)
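For context, a brief sketch of what PEP 572 added (the `:=` "walrus" operator, landed in Python 3.8) and why it was contentious on readability grounds:

```python
# Before PEP 572: binding an intermediate value for use in a filter
# meant either recomputing it or falling back to an explicit loop.
data = [1, 4, 9, 16]

doubled = []
for x in data:
    y = 2 * x
    if y > 10:
        doubled.append(y)

# After PEP 572 (Python 3.8): the binding moves into the expression.
# Compact, but the debate was precisely whether this reads well.
doubled2 = [y for x in data if (y := 2 * x) > 10]

assert doubled == doubled2 == [18, 32]
```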


ScyllaDB and Seastar are good examples of modern C++ that are "greenfield" applications, and have great performance and a very good developer experience.


The truth is almost nobody uses Rust.


There is at least one order of magnitude more C++ code than Rust code, and not all projects can switch over at the drop of a hat.


I would guess at least 3 or 4 orders of magnitude.


That's much more realistic. Right now if you were to guess the number of people currently employed primarily as C++ vs Rust programmers, the difference is going to be well over 100x.

Not to mention the long history of C++. Rust has a very long way to go to catch up.


Are you new to programming? There are entire industries using only C++ for their software (like the game industry).


>tends to be written in Rust.

Nobody uses Rust in production. Why such thoughts in general?


Rust is used in production by companies like Amazon, Facebook, Microsoft, Dropbox...

(Still way less than C++, of course. But we’re making big strides!)


Unfortunately a generation of developers was raised on C++ and Java. Fortunately the new generation writes Rust, so this kind of eye-hurting syntax is going away.



