Note that generators are already implemented, and they underpin the async state machine/stackless coroutine transformation as an implementation detail. However, there are still a few things to iron out before they can be stabilized, and I'm happy to see work there proceeding.
"may" does the same job as async/await (eg, happily support thousands of network threads on one native thread), runs at about the speed as async/await https://www.techempower.com/benchmarks/#section=data-r21 but the code looks completely "normal" - so special no keywords are needed, and almost none of the usual Rust async/await complaints apply. "Almost" because one does apply: may introduces coloured code, just like async/await. (If it didn't, it would run slower than async/await.)
Also, there is this: https://github.com/rust-lang/rust/issues/111272 Translated: coroutines currently use malloc to allocate their stacks, so if you overflow the stack you get to experience the C-like joys of random memory corruption, with no warning. Why it's implemented like that is a bit of a mystery, given that Linux and BSD offer ways to create fast "safe" stacks, and in fact this is the very mechanism used for every native thread stack, including Rust's main stack: https://man7.org/linux/man-pages/man2/mmap.2.html (see MAP_GROWSDOWN). Anyway, as may uses the same kind of stacks for its co-routines, it also suffers from this problem.
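For illustration, here's a rough sketch (my own, not how rustc or may actually allocate stacks) of the "safe" approach: mmap the stack with an unmapped guard page below it, so an overflow faults immediately instead of silently corrupting memory. Assumes the libc crate on Linux:

    // Sketch: allocate a coroutine stack guarded by a PROT_NONE page.
    use libc::{mmap, mprotect, MAP_ANONYMOUS, MAP_FAILED, MAP_PRIVATE, PROT_NONE, PROT_READ, PROT_WRITE};
    use std::ptr;

    const PAGE: usize = 4096; // assume 4 KiB pages for this sketch

    unsafe fn alloc_guarded_stack(size: usize) -> *mut u8 {
        // Reserve the requested stack plus one extra page for the guard.
        let total = size + PAGE;
        let base = mmap(
            ptr::null_mut(),
            total,
            PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS,
            -1,
            0,
        );
        assert!(base != MAP_FAILED, "mmap failed");

        // Make the lowest page inaccessible: a stack that grows down into it
        // segfaults instead of scribbling over unrelated heap memory.
        let rc = mprotect(base, PAGE, PROT_NONE);
        assert_eq!(rc, 0, "mprotect failed");

        // The usable stack starts just above the guard page.
        (base as *mut u8).add(PAGE)
    }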
Hopefully, this will now be fixed. Generators are interesting, but somewhat esoteric. Event-driven I/O is now mainstream, and as Go demonstrates, green threads are the least painful way to do it.
I've not done anything too interesting with Rust, but I've had somewhat mixed opinions of generators. They can obviously be very cool, but it also feels kind of annoying how much my code can jump around in ways that, in my opinion, are a bit difficult to predict.
That said, when used right they can do a lot to decouple logic, so maybe this is just some of my old bad habits being thoroughly unwilling to die.
It's not really any different from an iterator with some maps and filters slapped on top of it. Or how generators work in e.g. Python.
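E.g. something like this (a toy example of my own) has the same pull-driven control flow: the consumer asks for the next value and execution jumps back into the chain, much like resuming a generator:

    // Lazy, pull-based evaluation with plain iterator adaptors on stable Rust.
    fn values() -> impl Iterator<Item = u32> {
        (0..10)
            .filter(|x| x % 2 == 0) // keep the even numbers
            .map(|x| x * x)         // square them, one at a time, on demand
    }

    fn main() {
        for v in values() {
            println!("{v}");
        }
    }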
Async makes it worse. I'm currently debugging an infinite loop where, apparently, it's collecting an in-memory data source into a single Vec, but it keeps entering and exiting a future forever with no progress...
This is great; the boilerplate it takes to write an iterator is a bit cumbersome right now. Looking forward to seeing how this fares as it moves towards stable.
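For comparison, here's the sort of hand-rolled boilerplate I mean (a made-up example that yields 42 and then 3..6):

    // Hand-written state machine for "yield 42, then yield 3, 4, 5".
    // A gen block would collapse this into a few lines.
    struct Foo {
        started: bool,
        range: std::ops::Range<u32>,
    }

    impl Iterator for Foo {
        type Item = u32;

        fn next(&mut self) -> Option<u32> {
            if !self.started {
                self.started = true;
                return Some(42);
            }
            self.range.next()
        }
    }

    fn foo() -> impl Iterator<Item = u32> {
        Foo { started: false, range: 3..6 }
    }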
It's impressive what Rust is able to do with the guarantees that it provides. Generators and coroutines seem like more ways to take advantage of the machinery behind async, which allows for non-stack-based control flow. Without those guarantees, innovating on async would be very dangerous without resorting to GC or dynamic typing.
No one discusses how old code breaks with new releases. I can compile C code from 40 years ago. I can't compile Rust code I wrote 4 months ago. I feel Rust will always be a moving target. I am getting very discouraged.
This isn't really true with stable Rust. Rust uses a system of "editions" for crates: a newer compiler is directed to parse older-edition code using the older rules, and that code can freely interoperate with code written for newer editions, even in the same binary. If you use nightly features then sure, those can break, but once a feature is stabilized its functionality is guaranteed for many years.
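Concretely, the edition is just a per-crate setting in Cargo.toml (example manifest with made-up crate names), and crates on different editions link together in the same build:

    [package]
    name = "my-old-crate"    # hypothetical crate name
    version = "0.1.0"
    edition = "2018"         # this crate keeps compiling under the 2018 rules

    [dependencies]
    some-newer-crate = "1"   # hypothetical dependency that may use a newer edition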
I remember seeing a fix that was introduced which was technically a breaking change to stable, but only in obscure edge cases nobody actually used. The entirety of crates.io still compiled.
So breakages can in theory happen, but yeah, I don't think they do in practice.
Rust has fairly strong backwards-compatibility guarantees, so that sort of claim surprises me. Can you give an example of the code you wrote four months ago that won't compile using the latest stable toolchain?
    fn foo() -> impl Iterator<Item = u32> {
        gen {
            yield 42;
            for x in 3..6 {
                yield x
            }
        }
    }