C++ Core Coroutines Proposal [pdf] (open-std.org)
144 points by anqurvanillapy 6 months ago | 116 comments

It seems that the main issue that the authors of this proposal have with the current coroutines proposal in the committee is that that proposal effectively requires allocation for all coroutines, while offering promises that the optimizer will be able to avoid the allocation in common cases. This competing proposal makes allocation opt-in for various reasons, not the least of which is skepticism about how well the optimization will perform. (See "Implicit allocation violates the zero-overhead principle" in this document [1]).

[1]: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p097...

This is the main issue that Zig has with LLVM's coroutines implementation, along with:

* buggy - we currently have to disable all optimizations in coroutine functions because mem2reg reverts correct coroutine frame spills back to invalid parameter references

* compilation speed - LLVM's coroutine splitting code is extraordinarily slow.

I had a chat with Eddy B about how coroutines work in rust, and I think it's a much better approach. Rust does its own splitting in the frontend, and avoids LLVM coroutines in the IR.

But yeah, non-guaranteed memory allocation elision is just not going to cut it. Especially if we need guaranteed memory allocation elision in debug builds where we don't have time to run any optimizers.


The people who actually implement coroutines in the compilers think that eliding the allocation is very doable.

This is definitely wrong :)

Besides Richard, who is the code owner of the clang frontend, Chandler is the #5 contributor to LLVM (see here https://github.com/llvm-mirror/llvm/graphs/contributors), and a quick trip to llvm-dev will tell you how much time he spends working on optimizations :)

FWIW: The GCC folks I pinged were similarly unenamored with the idea of trying to elide allocation all the time.

Personally, I've seen a bunch of these "required optimizations" over the years, and they rarely pan out without significant help (e.g. required tail-call optimization usually requires a special marking or calling-convention change, etc.).

Full disclosure: Chandler, Geoff, and Richard are in my org, and I am theoretically their boss's boss. I say theoretically because it's pretty meaningless - I'm not going to tell Richard how to do C++.

Richard Smith, quoted in the document I linked as saying "I do not have any confidence this optimization would ever become reliable in the way that, say, certain forms of NRVO are," is the code owner of the clang frontend.

It's also true that relying on the compiler to optimize some stuff away (instead of making the overhead explicitly opt-in) is not very C++-ish in nature.

I'd like to introduce you to the wonderful world of templates.

But that's the thing: you have complete control over templates; there's no compiler magic involved. If you're unhappy with the way a template call sequence gets optimized, you can tweak it completely. In your comment below you mention Boost, but that's kind of my point: Boost is a completely optional third-party library, not a built-in construct of the language like the coroutines discussed here. Boost might rely on the compiler correctly optimizing some constructs, but the C++ template mechanics themselves, not so much.

Do you have any examples of templates incurring a runtime cost at -O0 compared to a “no templates” approach?

Expression templates (à la boost::spirit) definitely come to mind. The call stack in a -O0 build can get very, very deep.

Heck, just using std::numeric_limits<int>::min() will incur a function call at -O0, but nobody cares since it gets optimized at -O1 and up. The same thing applies more and more as you invoke templates from templates, but, again, nobody cares since we are confident it'll get optimized/inlined easily.

It's not nearly as prevalent in modern code since constexpr static variables are now a thing, but relying on these optimizations has historically been super important.

I would say the opposite, actually. Idiomatic C++ relies heavily on the compiler to optimize away copies, inline functions and objects, optimize away vtable lookups, etc.

I mean, yeah, at a very basic level you always expect the compiler to do the most obvious optimizations, but the same could be said about C or any language that's higher level than assembly. It's not that you shouldn't expect the compiler to optimize your code; it's that you shouldn't be expected to write code in a sub-optimal way and just hope that the compiler will have your back.

You can choose to use classes and inheritance without using virtual calls, which means that you won't have to use vtables at all, compiler or not. You can rely on the compiler to optimize return values but you can also use a return parameter and not rely on that at all.

I'm not saying that it's a bad thing to do it any other way but I maintain that saying "just make things easy and let the compiler sort it out" is very much not how C++ has been designed so far, for better or worse.

It seems there's a weird fetish in C++ land where non-letter characters seem more "pure" than words. I remember suffering from it myself, implementing atrocities like a subtraction operator on a path class which would give you the relative path between two paths. It made me feel very clever, but I feel sorry for anyone who would have to look at my code now and try to figure out what the hell was going on.

It's because defining "customization points" as operators makes it easier to write templates, essentially. See "C++ doesn’t know how to do customization points that aren’t operators": https://quuxplusone.github.io/blog/2018/08/23/customization-...

This is one of the downsides of not having typeclasses that is often forgotten about.

Wouldn’t uniform call syntax (like in D) solve most of the issues outlined above? I.e. it always boils down to “primitive types can’t have methods”; but uniform call syntax would let us call any function like a method on its first argument...

Or just allow methods to be defined on primitive types, like Rust.

Uniform call syntax is a superset of that, since you can’t have virtual methods on primitive types.

That's why I cannot wait for concepts

Often the reason is not wanting to introduce new keywords for backwards compatibility reasons. And when keywords are introduced, often they have to be awkward (like `co_await`) to minimize conflicts with existing code. Context-sensitive keywords are one solution but can make parsing even more complicated in places where arbitrary lookahead might be needed.

That’s also why the sizeof... operator was chosen. It’s an abuse of notation, because it returns a count of elements rather than the sum of sizeof applied to each, but ... and sizeof were already reserved.

C++ land is actually quite conservative.

You should try Lisp, ML, Haskell, or Julia land in that regard.

Developers with a math background, working in languages with symbols as identifiers, can get quite creative.

At least with Haskell (I can't speak for the others), you don't redefine existing operators; you add new ones. Which is less confusing in some ways.

Is it really less confusing that + for numbers has another meaning than + for a newtype that implements Num?

I think so, since you can also count on that type to implement the other required methods of Num.

Right, the fact that functions and operators must be part of a typeclass to be overloaded makes a huge difference. Sure, you can define your Num instance however you want—but that doesn't change the fact that by providing a Num instance you're asserting that the type is "number-like". The typeclass also enforces certain restrictions, such as the fact that the arguments and return value of (+) must all be the same type, which helps curtail the more egregious abuses like the repurposing of the C++ bit-shift operator for stream I/O. And since the set of possible operators is so much larger (almost any sequence of Unicode symbols) there isn't a great deal of pressure to abuse the existing notation. In C++ you can only override the limited set of built-in operators, even if the concept you're trying to express isn't closely related to any of them.

It helps that Haskell operators are namespaced just like functions, so you can choose which operators to bring into scope from other modules.

There is no way to enforce that + actually implements the concept of adding two values.

With dependent types you can. I'm not aware of whether that proof is possible in Haskell, however.

Besides, that's not what I was saying. What I was saying is that + in Haskell means more than an overloaded operator. It means the type has access to additional methods. This gives you more information.

In that case you get the same information on any usable C++ IDE.

It's weird how opposite the core language and the std library of C++ are, and even more so in the recent revisions! The core language seems to go for being as code-golfy as possible, while the std library goes super-verbose without any convenient shortcuts.

The core language adds ASCII character notations optimized for brevity.

The library instead makes things that should require one argument take two or three (which ranges will hopefully fix), makes value-to-string conversions as verbose as possible, etc.

The core language is much harder to work with than the library, thus the difference.

The library can be posted as code and thus we can test it thoroughly, and discuss it based on a real implementation.

Contrast that to the language. It isn't code; it is standardese. It is written by and for lawyers, in the negative sense of the word lawyers. Anything you do in the core language is hard because there are too many ways to accidentally (or intentionally) slip something evil past everybody.

C++ takes backward compatibility seriously. If I didn't have a 20-year-old code base of working code to deal with, I'd be working in Rust, D, Go, Java, C#, or whatever the language of the day is. (Note that because of the number of users and amount of time invested, C++ often has better optimization; but if that fraction of a percent matters, you can invest in optimizer work for your chosen language and solve the problem.)

That's pretty idiomatic in Python and often is fine. But it surely can be abused. Pathlib lets you use / to append paths. You're not dividing anything, but it's still quite intuitive.

And python sets let you subtract to get a difference, similar to your clever trick. I think it helps when these things are core to the language and broadly known. Otherwise you're scratching your head at what - will do.

The rapid change in C++ syntax may destabilize it.

For example, `auto OpenFile(const string& filename) using future_coroutine<File> [filename]`. Okay, I get it. Or the operator `[<-]`.

But as the idioms find their way into production code - an entire important segment of the C++ developer base is going to no longer be capable of reading the language. And that might be worse than not having features.

I spent a decade watching C++ painfully slowly add more advanced template support and... well, that's about it. After a million iterations of "for (mylisttype::const_iterator i ..." they finally added ranged for loops.

The rate of change seems to be increasing. This is initially a bit shocking until I remember that every single job I had was like using a completely different language because, even ten years ago, every shop used a different subset of C++ and STL!

Add all you want to the language, it's all going to devolve into code-review infighting the same way it did in 2008.

This is a common complaint about C++, as if the other languages were any better in this regard.

Java 11, C# 8, Python 3.7, Fortran 2018, F# 4.5, ... Try to write in a way that everyone will agree with, or do a language quiz to see who gets which one right.

Sure, there is C, which looks so simple. But who gets all the ISO C changes across C89, C90, C11, and C17 right, plus UB and compiler-specific extensions/issues, without having to look into the books?

> I spent a decade watching C++ painfully slowly add more advanced template support and... well, that's about it. After a million iterations of "for (mylisttype::const_iterator i ..." they finally added ranged for loops.

C++ was first standardized in '98. Nothing happened for more than 10 years (in particular, no new advanced template support) unless you count the very minor bug fixes in C++03. The next revision, C++11, added ranged for loops.

Compilers don't strictly adhere to standards, and I'm far more concerned with things like whether my compiler crashes, how godawful the error messages are, and whether the bundled STL even functions reliably.

Haha, you're too crusty to be listened to!

Yeah, you're right. In some respects, things haven't changed.

Hah, those code reviews must be fun. At least as much as a Scala one ;)

If I understand things correctly, the C++ people are attempting to modernize the language and add a bunch of new stuff with the intention of deprecating the old stuff. They're constrained by legacy cruft so the new features end up with rather ugly syntax. I applaud their efforts but frankly I'd rather see a new language done right.

I'm currently trying to modernize my C++ skills and while I love some of the new features I think they're creating an inconsistent, ugly, beast of a language.

C++ will always have a place because it's a language that literally has (or will have) everything. I've used features of C++ that have no analogy in other languages. If I designed a language tomorrow, I wouldn't put that feature in. Too obscure. But I needed this random feature for a project on a constrained device to do C callback trampolines to C++ object methods.

When it comes to C++, there is no magic alternative that is "done right". I think Rust may be the closest C++ alternative we will get in our lifetime, and its syntax isn't exactly non-ugly either.

It has far from "literally" everything. It doesn't have async/await, doesn't have a sane way to create simple web servers, and doesn't have map/find/filter/etc. on collections, at least not a version that won't span multiple lines or the whole screen: they require mandatory arguments (begin, end) that are useless 99% of the time, since you usually want to operate on the whole collection anyway.

> It doesn't have async/await,

That's just a C# idiom, and far from being the right or sane way to handle concurrency.

Meanwhile C++ has active objects and futures and other concurrency design patterns.

> doesn't have a sane way to create simple web servers

Check POCO.

> doesn't have map/find/filter/etc on collections, at least not a version that won't span multiple lines or the whole screen because they require mandatory arguments (begin, end) that are useless 99% of the time because you want to operate on the whole collection anyway, etc.

That's a very silly complaint. You acknowledge that C++ does have map/find/filter (which at this stage is rather obvious to anyone who ever used C++), but somehow you're complaining that C++'s interfaces require programmers to specify where the collections should start and finish.

> but somehow you're complaining that C++'s interfaces require programmers to specify where the collections should start and finish.

I think it is a valid complaint. Most of the time you want to apply something like "find" to the whole collection.

I'd also love to have C++ standard maps (esp. unordered_map) where I only pay for what I use. Namely, due to the C++ spec I have to pay for the nodes instead of having a flat, CPU-cache-efficient buffer. In other words, a different map type where other items are allowed to move in memory when something is inserted or removed.

Oh, while we're at it, it'd be nice if vectors etc. could realloc instead of allocating a completely new memory region and freeing the old one when more space is required.

A more ergonomic story for specifying alternative allocators would also be very nice for some niches, like when you need to avoid fragmentation or need better performance in embedded development.

> I think it is a valid complaint. Most of the time you want to apply something like "find" to the whole collection.

Frankly, this is a very overstated problem. It's trivial to write a header with such functions (e.g. https://github.com/OSSIA/libossia/blob/master/OSSIA/ossia/de...). And it's already provided by Boost if you use it.

Yeah, it's trivial to do that.

But since it's not in the STL, everyone will use slightly different implementations for something this mundane and common.

I'd rather say, (mostly) everybody will use Boost.

Or folly. Or something else. Which leads to the awful package management story. (Maybe modules will help.)

Memory allocation by the STL has been broken for as long as I remember, which is at least twelve years or so. It's been a while, but if I remember correctly it doubled its allocation every time it ran out of space.

> It's been a while but if I remember correctly it doubled its allocation every time it ran out of space.

why would you call this "broken"? That's one of the most optimal solutions for reallocation if you append values continuously, since it will reallocate only logarithmically often.

Which libs _don't_ use that allocation strategy? The fact that you think it's broken only means you don't really know what you're talking about.

what? ha, that is the textbook approach to reallocating an array for additional values to get a logarithmic number of reallocations

I'm not so sure I'd want a 2GB vector to grow to 4GB, etc.

In such extreme cases you can (and should) use your own allocation strategy.

Yeah, but I'm not entirely clear how I'd for example get std::vector to use a mmap'ed range, that I could zero-copy grow.

Also in that case I might want to grow it by a fixed amount, once it grows past a certain threshold.

It's a shame one needs to write another vector implementation just for that.

Why wouldn't writing an allocator class be enough for mmap?

Probably not, because the allocators don't expose an "extend" or "realloc" concept, so all you'll see from within your allocator is a request for an entirely new region that the vector wants in order to copy the new stuff into.

To implement a growing allocation, you'll have to create a custom vector, pretty much.

How would you ensure nothing was used in the region after the original allocation?

O(n^2) time for n push_back calls would be worse, so some sort of multiplication/exponential growth is required for the "fits all" default vector and its default growth strategy.

Old versions of MSVC used a 3/2 growth factor which led to less extreme jumps for large vectors, although also a lot more copies when filling a small vector. I'm not sure if that's still true.

Why not?

I think you mean amortized constant complexity, or O(1).

This is implementation-specific. Care to tell which compiler you are talking about?

> That's just a C# idiom

Not just C#, Python and JS also have it already. And async-await is the first time ever where I've thought that yes, this is a good way to do single-threaded async (event-queue-ish) or even multi-threaded things.

> Check POCO.

Thanks for the hint, I will check it out.

> but somehow you're complaining that C++'s interfaces require programmers to specify where the collections should start and finnish.

Take a look at: http://www.modernescpp.com/index.php/higher-order-functions The C++ functions are unnecessarily verbose compared to other languages.


    transform(vec.begin(), vec.end(), vec.begin(), [](int i) {
        return i*i;
    });

could be much more readable as

    vec.map([](int i) {
        return i*i;
    });
That also makes for-of the better choice in my opinion; if you're going to write more code, I'd rather write code that is readable.

Those examples aren't really the same though. Transform is a free function which operates on anything that satisfies 'ForwardIterator', map would be a member function of the vector class.

Theoretically you could add helper functions to all suitable containers which calls through to transform (I think this is what you're suggesting). But it would feel very heavyweight and add a lot of cruft to the std library.

You could also add your own utility free function similar to

  template<class T, class F>
  void mymap(T &t, const F &f){
      std::transform(t.begin(), t.end(), t.begin(), f);
  }

> could be much more readable as vec.map([](int i){ return i*i; });

That, however, violates the separation of concerns principle by mangling together data structures and algorithms.

Rust is also getting async/await

Which de-sugars to generators, which is similar to, but different from, the various C++ coroutine proposals. I haven’t dug into this one yet though...

It’s going to be pretty important for us: http://aturon.github.io/2018/04/24/async-borrowing/

What you want is ranges v3.

That only saves a few characters. And "map" is already a data type in C++. How are you going to explain that ambiguity to a student: that you can "map" a vector, but oh yeah, map is also this container type?

Conceptually, I hate the word "map" to describe a transform. What does a real-world, paper map do? It associates locations drawn on the paper with real-world, physical places. That doesn't have any commonality whatsoever to morphing a group of objects. The term is misused.

It removes enough characters to turn it from an unreadable mess into legible code. The point is mainly to remove the three completely unnecessary parameters; it doesn't matter if it's called transform or map.

> C++ will always have a place because it's a language that literally has (or will have) everything.

I don't think that's true at this stage. Nowadays, beyond legacy code, C++ is used mainly in embedded, number-crunching, and cross-platform GUI applications, and that's about it. Meanwhile the world has moved on to web services and web apps, where C++ is nowhere to be seen, and other competing lower-level programming languages are starting to eat away C++'s lunch.

Most of my work is done in Java, .NET and JavaScript; C++ is the only native alternative that I can easily plug into all of them when I need native interop.

Go is not an option until they get their story straight with 2.0.

Swift is nice, but really it will never go beyond being an Apple platform language.

Kotlin/Native is doing its baby steps.

D seemed like a nice alternative, but they are such a small community without any big corporation support, that the language might already have lost its opportunity.

Rust would be the best alternative, but it still lacks the tooling integration story regarding IDE, graphical debuggers and OS vendors support.

Yes, no one is doing pure apps in 100% C++ anymore; Java and .NET are slowly migrating to bootstrapped runtimes, but the language is not going away for the foreseeable future.

Kids are learning to program on Arduino devices using C++.

Microsoft already has some ongoing Rust projects, but Visual Rust is not yet here.

AAA games, compilers (especially LLVM and GCC), OSes, HPC, fintech, deep learning, GPGPU shaders, GUI composition engines, IoT, medical devices, car infotainment systems, VFX software: it might be a niche, but it is a very big niche.

One of the good things about polyglot development is that one doesn't need to silo himself/herself as developer X and be worried if language X is suitable for full stack development across all domains of computing.

EDIT: typos

> Go is not an option until they get their story straight with 2.0

Would you mind expanding on this point? Where I work, we do a lot of C++ for the reasons you described, but I have started to learn Go to see if it's a suitable replacement.

From my point of view, no generics, error handling boilerplate, no real enums (although there is a kind of workaround).

There are other issues regarding tooling, especially for the use cases that I care about, like dynamic loading into other platforms (JVM and .NET), GPU programming, and graphical debuggers.

So naturally many C++ devs did not feel at home with Go.


Those pain points are the biggest roadmap points on a possible 2.0 version.


C++ (Rust as well) is for high complexity problems that need high performance solutions. It's not just a few narrow application areas. For example, all major HTML and JS engines are also written in C++. I wouldn't call them (just) GUI applications. I bet that all offline renderers for professional CGI are also C++.

Don't forget high performance games; which is a huge market, highly complex, and very competitive.

As already written, things that require absolute efficiency and high performance, like HPC and ML, cannot be done without C++.

I develop some HPC code, and most of it is infeasible without C++.

Other languages' high-performance libraries, such as PIL and NumPy, are written in C and C++.

NVidia GPUs are now designed explicitly for running C++ code efficiently.

CppCon 2017: Olivier Giroux, "Designing (New) C++ Hardware"


Assuming another systems language, e.g. Rust, does indeed replace C++'s use cases, it is still a very long path until it gets this adoption level from hardware vendors.

>C++ is used mainly in embedded, number-crunching, and cross-platform GUI

You forgot a couple: your OS, your web server, image and video codecs, and basically any other low-ish level abstraction you rely on. The world isn't built on web apps.

I agree with you, except that there is a sizable chunk of back-end code that is using C++ or at least calling it.

Beautiful languages don't get adopted unless there is an OS vendor stating "It is my way or go elsewhere".

The languages with higher success rate are the ones that build on existing code and practices.

Every language with major market adoption is inconsistent, unless one does a language reboot Perl 6/Python 3 style and is willing to push it forward.

Do you have an example of a beautiful language that was adopted due to vendor stance? I cannot call C# beautiful. Perhaps Swift?

I would say C# was quite beautiful in 2001.

And yes, Swift as well.

Kotlin might eventually be another one, depending on how the Java story evolves past Java 8 on Android.

I don't know about vendor stance, but Elixir feels fairly elegant (better descriptive word than beautiful, I think.)


> I applaud their efforts but frankly I'd rather see a new language done right.

My opinion is that what's needed is robust and performant interop, and only then a new dialect that fixes some of the deep legacy problems with the language. At least that would provide a clean migration path.

They are adding more syntax to reduce syntax. Have you seen the spaceship operator? C++ has jumped the shark. I just want an ABI. Can we get an ABI already?

It is called COM/UWP on Windows.

A C++ ABI requires an OS written in C++; likewise there is no such thing as a C ABI per se, rather a UNIX ABI, Win32 ABI, and so forth.

A C++ ABI does not require an OS written in C++. It just requires all compiler implementations to agree to standardize on a single ABI and enshrine it in a spec that any future implementations must abide by.

A C++ compiler that needs to support multiple distinct ABIs?

One per each platform, plus an additional one, agreed by all these guys? And the list is not 100% compliant, e.g. TI is missing.


Not even C has managed to do it; what people conventionally refer to as the C ABI is actually the OS ABI.

I still remember having to deal with multiple C ABIs on MS-DOS and Windows static libraries.

I'm really interested to know how a standard ABI would cover both little endian 32 bit x86 and big endian 64 bit POWER.

Compilers already have ABIs, they're just compiler-specific and not documented. If they didn't have an ABI, then you couldn't link 2 C++ libraries together.

In terms of having one spec that handles arbitrary architectures, maybe it could be defined relative to the C ABI? I don't actually know what the internal ABIs used by the current crop of C++ compilers look like, so I don't know if they could be expressed relative to the C ABI or not.

Alternatively, one could come up with a set of configurable values that define what makes one architecture different from another (such as word size, long/pointer size if different from word size, endianness, etc.). You know, the stuff the compiler already has to know in order to compile for an architecture (and I'm pretty sure clang, at least, already has a way to define new target triples in terms of a set of configurable values for this sort of thing). I assume existing compilers don't invent entirely new ABIs for each new architecture, but instead adapt an internal standard C++ ABI to the new architecture through a similar process already. The same could be done for a public standard C++ ABI.

Pardon the dumb question, but is the term ABI new (or experiencing a recent increase in usage)? I've seen it twice in the last 24 hours, and never before in my life. And that's despite being entirely too familiar with COM.

> but is the term ABI new

No, it is age old. The thing is that it is only "a thing" to embedded native developers and library authors.

The general concept is that the programming language "API" doesn't define how the dll/o/a/la/lib/so/dylib "loader" looks for "symbols" in the file. If, for example, I do:

    enum {FOO, BAR};
and next month I change it to

    enum {BAZ, FOO, BAR};
then everything will still compile, but if I execute an older program with my library, it still thinks FOO is 0 and BAR is 1, while that's no longer the case. It gets worse for function and type definitions. Then when you get into C++, it gets an order of magnitude more complex, because almost every code change will somehow break one or many symbols. This is why there are often "proxies" called PIMPL or d_ptr between the library symbols and the actual implementation code. C++ has no officially standardized way to serialize objects to their symbol names. This is often a pain point; Apple has wanted to fix this for a decade but has so far failed to get everybody to agree on something.

For embedded devs, more problems arise from the fact that the instruction set and C library themselves are not the same for all devices, and this also breaks pre-compiled code.

Many platforms will often favor "static binaries" over dynamic ones to avoid the consequences of breaking the ABI.

I think you see it because there has been a resurgence of interest in C/C++, probably because people are giving up on the idea that all new languages will run on the JVM.

The term ABI is ancient, since it defines the underlying structure that allows you to link compiled object files and libraries together (the latter both statically and dynamically), allowing code generated in one module to successfully call functions in another.

> Allowing code generated in one module to successfully call functions in another.

That's not quite true. That's already possible without a standard ABI.

What's not possible is to get code compiled by a specific compiler to call modules built by another compiler provided by another vendor. That's the only problem that is addressed by a common ABI.

I would also add that considering MS's interoperability history and how prevalent MSVC++ is, working on a standard ABI would be a waste of time.

> What's not possible is to get code compiled by a specific compiler to call modules built by another compiler provided by another vendor. That's the only problem that is addressed by a common ABI.

Yes, compilers need to agree on the ABI, the platform ABI, that is. Today most platforms already have a de-facto or de-jure standard ABI and compilers have the option to implement it.

Even if such a thing were possible, there is no value in the standard committee standardizing a single ABI, as most platforms would never break backward compatibility to switch to it.

No, it is an old term.

Yes, an ABI would be great. I am always amazed how easy it is to use libraries in C# or Java whereas C++ is just a PITA.

That's possible because all C# and Java compilers only really target a single platform (their respective VM).

It is easy as well when you use a compiler that generates bytecode, like C++/CLI.

Ummmm... Try running java 8 bytecode on an older JVM.

[<-] "Our proposed spelling is intended to suggest unwrapping"

I'd argue co_unwrap() has spelling that suggests unwrapping.

I still have to read it through, but why does it require new syntax and not just new standard library additions like boost::coroutine?

I just fell in love with boost::fiber, so I hope there will also be something like it in a future std.

This could be completely off-base but:

- Clarity and establishing convention. Boost coroutine2 uses a lot of boostyness to make using coroutines in C++ less painful, but its coroutines still require boilerplate and can be quite difficult to write. Understanding code that uses coroutine2 also requires understanding how the library works and how it bends normal C++ syntax in unique ways.

Having a dedicated operator like [<-] arguably produces more familiar looking and clearer user code. (Take a look at the generator example in the appendix of this proposal; the implementation of the 'generator' type is horrific but the 'Traverse' function is pretty easy to read.)

- Generalization. Boost::coroutine2 targets coroutines. This proposal tries to generalize aspects of the coroutines proposal such as the unwrap operator [<-] to support use cases like linear monads.

This doesn't seem to be a coroutines proposal so much as an attempt to sneak (Haskell's) `do` notation into C++.

Which I applaud, it's a brilliant feature.

I don't understand the point of co_*. Why should you have something as ugly as this? Just use std::coroutine?

They are supposed to be keywords like return or throw. Those cannot be namespaced either.

And yet this proposal still doesn't let you implement zero-overhead objects-as-coroutines. I'd like to be able to create a coroutine that initializes some state ("constructor"), yields back to the caller, and lets you call alternate re-entry points ("member functions") that manipulate the state and yield back, or the final entry point that cleans up the state ("destructor"). If the initialization fails for whatever reason, the yield will simply be skipped. Crucially, I want to be able to do all of this without indirect jumps other than returns.

Serious question, what's the difference between this and a class?

I guess when you call an entry point a second time, it restores the existing context at the point of the previous yield. So you have a mix of shared state (class members) and local state between the various entry points. Just a guess though.

If the local state was accessed across a suspend point that would be undefined behavior. So it would have to be spilled into the coroutine frame. So it would be the same as a class.

If you try to implement a class in C, you always end up with duplicated cleanup code between your "constructor" and your "destructor". Being able to jump from the constructor to the corresponding part of the destructor when the construction of a subobject fails would get rid of this often buggy/untested secondary code path.

Some idioms have been developed over time, such as first initializing in a failsafe manner all the object's fields to a value that prevents deinitialization, but they add complexity to the code where it is not needed.

Bikeshedding alert: If a linear monad use case is considered and a do-notation-like syntax is proposed, why isn't a left-bind =<< operator introduced as well?
