A four year plan for async Rust (without.boats)
215 points by steveklabnik on Nov 7, 2023 | 228 comments



This is a really interesting post, and it's predictable if disappointing the degree to which the comments are rehashing all of the same tired arguments about async rust.

I for one am pretty satisfied with async rust, and I'm excited for the stabilization of async fn in traits. I'd love to see some of the improvements discussed in this post come to fruition. Generators in particular are something I've found myself wanting on multiple occasions, because writing custom Iterators is relatively complex.
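As a small illustration of that complexity, here's a trivial hand-written Iterator (names are mine, just a sketch): every piece of loop state has to become an explicit struct field that survives between `next()` calls, where a generator would express the same thing as a loop with `yield`.

```rust
// A hand-written Iterator for something trivial: even-stepped numbers up to a limit.
// With generators this state bookkeeping would collapse into a loop + yield.
struct Evens {
    next: u32,
    limit: u32,
}

impl Iterator for Evens {
    type Item = u32;

    fn next(&mut self) -> Option<u32> {
        // All loop state must be carried in fields between calls.
        if self.next > self.limit {
            return None;
        }
        let out = self.next;
        self.next += 2;
        Some(out)
    }
}

fn main() {
    let evens: Vec<u32> = Evens { next: 0, limit: 6 }.collect();
    assert_eq!(evens, vec![0, 2, 4, 6]);
}
```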

The point about return-type notation is really interesting. Once async fn in traits is stabilized, we'll probably go through and rip out as much usage of the `async-trait` macro as possible, so I'll be curious to see how often we run into the issue described there. I also really like the idea highlighted in the blog post of adding async sugar to function closure types as part of expanding the support for async closures, e.g.:

    where F: async FnOnce() -> T
    // rather than
    where F: FnOnce() -> Fut, Fut: Future<Output = T>
Personally, the lack of good support for async in closures remains one of my only issues with async, just because we often write code in a more "functional" style, and whenever we're dealing with complex async stuff we often wind up having to drop out of that.
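For reference, the shape you end up writing today is generic over both the closure and the future it returns; a minimal sketch with made-up names (the no-op waker exists only to poll the already-ready future in this demo):

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Today: two generic parameters, because `F: async FnOnce() -> u32` isn't expressible.
fn with_async_callback<F, Fut>(f: F) -> Fut
where
    F: FnOnce() -> Fut,
    Fut: Future<Output = u32>,
{
    f()
}

// Minimal no-op waker, only so we can poll an already-ready future here.
fn noop_waker() -> Waker {
    const VTABLE: RawWakerVTable = RawWakerVTable::new(
        |_| RawWaker::new(std::ptr::null(), &VTABLE),
        |_| {},
        |_| {},
        |_| {},
    );
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let fut = with_async_callback(|| async { 21 * 2 });
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Ready(42));
}
```

With the proposed `async FnOnce` sugar, the second generic parameter and its `Fut:` bound would disappear.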


> AsyncIterator and async generators

For what it's worth, Dart has had synchronous and asynchronous generators (including `await for` statements) for as long as it's had async/await. They are neat features. I've definitely written code using synchronous generators that would be hard to manually transform into a custom Iterable implementation.

But they add a large amount of complexity to the language implementations and it turns out are very rarely used in practice. Here's a quick scrape I did of the most recently published 2,000 packages on our package manager:

    -- Style (64317 total) --
      59409 ( 92.369%): normal  =================================================
       4842 (  7.528%): async   ====
         42 (  0.065%): async*  =
         24 (  0.037%): sync*   =
Async/await is clearly pretty useful with there being one for roughly every ten normal functions. But generators and async generators are barely used at all.

Rust might be in a different situation because efficient concurrency may be much more important in a systems language, but it's not clear to me that those same features carry their weight in Dart.


I'm not sure "percentage of usage" is a metric you can use to fully decide how "useful" something is. Take macros in Clojure for example, usually you just have a few of those, but the ones you have are really useful, they give a lot of functionality and practicality to the language. Maybe the same thing is happening with sync/async generators?


Macros are a really good example. A world without the equivalent of `#[derive(Serialize, Deserialize)]` would suck, but it's just a few characters and I almost never write macros personally.


That's a good observation about a lot of language features, yes. Macros are a particularly good example.

But for a feature like generators which is, I think, essentially user-facing syntactic sugar, I do think a count of usages is a pretty good measure of usefulness.

It's probably also worth noting that my data here is from published packages, which tends to skew towards reusable libraries and away from application code. So, if anything, I would expect this to be an over-count of their use if they primarily made libraries/frameworks useful on behalf of application developers.


> I think, essentially user-facing syntactic sugar,

One thing to remember is that Rust is a bit weird here; often times, features like this are "sugar" in a sense, but that sugar lets you write safe code, whereas the non-sugar version would force you to write unsafe. This means this kind of thing is a larger win in Rust than it would be in other languages.


Yeah, that's a good point.


A big difference between Rust and other languages like Dart is that our async/await is based on the same sort of coroutine transform as generators would be, rather than continuations, so it's a lot less additional complexity to add generators compared to another language that has both async/await based on continuations and generators based on coroutines.


I've just used an async generator when upgrading to aws-sdk v3: the function lists all the items in an S3 bucket, and if the response is truncated it yields a recursive generator.

Stops when it's got all the keys.

First time in 5 years of writing TypeScript that I've actually used a generator.


Generators (either variety) in JavaScript are flawed in their implementation in that they only went halfway with it. They also needed to make all primitive objects iterable, and/or they should have introduced iterable helpers for all data structures. The fact that you can't use an object in a `for...of` loop without implementing Symbol method(s) is a glaring hole in their everyday utility. It makes them non-obvious to reach for in many cases.

The other issue, to me, is that some of the semantics of generators are not well thought out: for instance, you have to call `.next()` _twice_ in order to get the first `yield` value, and how arguments to `.next()` should be used correctly is opaque. This is combined with the fact that `yield` doesn't follow the same lexical scope as `this` does (i.e., you can't have an arrow function yield even if it is enclosed by a generator, unlike `this`, which can be used inside an arrow function as a reference to its enclosing scope; fixing that would make generators far more useful IMO).

Combine all this with the fact that most frameworks don't support generators natively for things like components (though they are starting to accept Promises / async functions) and you end up with a relatively niche feature.


Generators are one of those things that pop up more often in library code than in application code, I've found.


For what it's worth: thanks to Dart's async/await and async* and sync* , I find it incredibly easy to write performant, clean concurrent code in Dart. I've implemented an application (very IO heavy, highly concurrent) that is half Dart, half Java (for reasons) and the async code in Java is atrocious, while the Dart code is just beautiful.


I do wish discussions like the linked blog were more clear on specific use cases features would enable. Async summation shows off the syntax sure, but I don’t know any real world problems that are currently hard that this would make easy. Not to say there aren’t, just I don’t know them and would love to understand!


TBH I think that, for the most part, I will only benefit from a few of these - mostly in terms of sugar. I routinely have positive experiences with Async rust and basically never have negative experiences/ issues that crop up because of it. In 3 years of writing Rust full time I had one async problem one time - I accidentally was causing an infinite select! loop in a tonic server, so the server would hang. Not really a big deal for 3 years of async work. I could have done the same thing in sync code just as easily tbh.

I also don’t think that async-drop is quite as necessary or desirable as it may seem. I really wanted it at one point and then I realized that it’s just too tricky. It reminds me of how File ignores any error from closing on drop - ultimately, “drop” is just a really tricky place for anything complex. I’d rather see a linear type, like:

    struct LinearFile { inner: std::fs::File }

    impl LinearFile {
        fn close(self) -> Result<std::fs::File, std::io::Error> { self.inner.sync_all()?; Ok(self.inner) }
    }
Where LinearFile can’t be implicitly dropped; you have to .close() it, get a File, and File can be dropped. Hand wavy and not necessarily a good implementation but hopefully this is getting the point across. This would be preferable to shoving more into a drop impl when drop is such a constrained interface.

Or maybe add a try_drop(&mut self) that will run implicitly but also ? implicitly. I don’t know.

I guess the point is that I’m not sure an async drop can ever be worth it.
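Rust can't express true linear types today, but a rough stable-Rust approximation of the idea sketched above (hypothetical names, a runtime check rather than a compile-time guarantee) is a wrapper whose Drop complains if close() was never called:

```rust
use std::fs::File;
use std::io;

// Hypothetical wrapper approximating "must call close()" on stable Rust.
struct LinearFile {
    inner: Option<File>, // Some until close() consumes it
}

impl LinearFile {
    fn new(inner: File) -> Self {
        LinearFile { inner: Some(inner) }
    }

    // Consuming close: surfaces the sync error instead of swallowing it in Drop.
    fn close(mut self) -> io::Result<File> {
        let file = self.inner.take().expect("closed twice");
        file.sync_all()?;
        Ok(file)
    }
}

impl Drop for LinearFile {
    fn drop(&mut self) {
        // A true linear type would make this a compile error, not a runtime check.
        debug_assert!(self.inner.is_none(), "LinearFile dropped without close()");
    }
}

fn main() -> io::Result<()> {
    let path = std::env::temp_dir().join("linear_file_demo.txt");
    let wrapped = LinearFile::new(File::create(&path)?);
    let file = wrapped.close()?; // forgetting this would trip the debug assert
    drop(file); // plain File can be dropped freely
    std::fs::remove_file(&path)
}
```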

TBH I feel like ~30% of people's complaints about async are solved by:

a) Encouraging a sync + async API in libraries

b) Adding `block_on` to the stdlib, which will help with (a)

People mostly seem to care (and imo this is stupid but whatever) about using async in sync contexts when they don't want to, and they don't seem to know that you can just block.
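To that point, a workable `block_on` is genuinely tiny; a minimal std-only sketch that parks the current thread between polls (not production code: no timers, no I/O reactor):

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// Waking simply unparks the thread that is blocked inside block_on.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// Poll the future on the current thread, parking between polls.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(),
        }
    }
}

fn main() {
    assert_eq!(block_on(async { 40 + 2 }), 42);
}
```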


I wish that some of these folks complaining about how rust libraries force you to use async would go out and write some equivalent libraries with synchronous APIs. Rust's philosophy has always been to let library authors explore a problem space before (if ever) pulling stuff into the stdlib, and I think it is generally only beneficial for the ecosystem to have multiple ways of solving a problem.


I think it goes to show that the vast majority of Rust devs don't actually care about this problem, otherwise they'd be doing that already. Like, providing a sync API over an async API is almost always as simple as:

    struct SyncThing {
        async_version: AsyncThing,
    }

    impl SyncThing {
        pub fn sync_api(&self) {
            block_on(self.async_version.async_api())
        }
    }

If this were really such a huge problem I think we'd see more PRs. I get that there's some survivorship bias here, but still...


Is that the same as the "Unforgetable types" mentioned in the blog?

I never thought about it but, my only problem with C's manual destructors is that the compiler doesn't check that you remember to call them. If we accept a language like Rust that does sophisticated static analysis, it doesn't need to delete things for you, it only needs to be sure you have a plan to delete everything.

That's interesting.


> Is that the same as the "Unforgetable types" mentioned in the blog?

Pretty much, yes.

> I never thought about it but, my only problem with C's manual destructors is that the compiler doesn't check that you remember to call them.

Right, so in this case the compiler would force you to consume the value somehow. This is "linear" typing.


I was a huge opponent of Rust, but finally decided to give it a try in anger once again.

I began writing a large application, and noticed that many of my libraries only offered async versions, and the promise was quite appealing -- not having to worry about threads or concurrency as long as I followed certain rules. What I ended up with was an incredibly slow application because of the limitations of Rust async, and the runtime. All of my I/O got pushed through one thread (with tokio), and that, plus scheduling overhead, became my bottleneck. Debugging this was a nightmare that ended with me writing my own tracing tools.

20/20 hindsight, I would not write my code to be async, and would just prefer threads. I'm really not sure how / why async took off the way it did.


> All of my I/O got pushed through one thread (with tokio)

In all my use of Tokio in the last few years, I never heard of such a thing.

In tokio::fs::File [1] it calls spawn_mandatory_blocking to do file writes, I assume this is similar to spawn_blocking [2] which sends a task to Tokio's blocking thread pool. That thread pool is supposed to max out at 512 blocking threads [3], unrelated to CPU core count.

Tokio's TcpStream appears to be built on mio's TcpStream. I didn't dig deep into the code for this, but it doesn't just call spawn_blocking, and I'm assuming on Unix it ultimately registers the socket with epoll or equivalent, so it never blocks a thread to do socket reads or writes.

Could you share more about your application?

[1] https://docs.rs/tokio/latest/src/tokio/fs/file.rs.html#682

[2] https://docs.rs/tokio/latest/tokio/task/fn.spawn_blocking.ht...

[3] https://docs.rs/tokio/latest/tokio/runtime/struct.Builder.ht...


So, it wasn't actually synchronizing all of the I/Os onto one thread, but I wasn't getting any parallelism due to the amount of time required to dispatch I/Os. Essentially, my program was highly concurrent, but I wasn't able to get any I/O parallelism: each syscall was pretty cheap, and the cost of dispatching each I/O was too expensive.


In these debates I find myself very confused. I feel very comfortable with async ergonomics in Rust. It seems like the argument against it is that it has poor ergonomics... but how? It's rather straightforward to go from sync to async, and for async to sync you just have to code the strategy in. If you're in sync and need to run async, you need to run it in some executor (either in threads or in single-thread concurrency or something else). I've been writing async code in Python for years and I'm truly missing what's so bad -- or even different -- about Rust async. What am I missing? I'll continue to write async code because it's pretty nice.

EDIT: People talk about swapping executors etc, but it seems like for 99.999% of programming applications you don't need anything like this, nor does something like Python support this anyway, and we're all fine with it.


Just noting for the sake of countering this narrative that async rust is fundamentally broken that I feel the same way as you, having been working with async rust professionally for 2.5 years.


> It seems like the argument against it is that it has poor ergonomic... but how?

It depends on what you're comparing it against. If you are comparing Rust async against other languages, then the major difference is Rust's borrow checker. In every implementation of async, you effectively move whatever state is needed off the stack and into a separately allocated memory area. In Rust that separate area is a compiler-generated struct, much like a closure.

This interacts badly with Rust's borrow checker. The borrow checker needs to know the lifetime of any object you deal with. It has two "base truths", by which I mean lifetimes it already knows about that you can derive other lifetimes from: static and the stack. If they don't suffice, you have to handle lifetime management yourself and at run time, using Rc or Arc or something. Being forced to do that complicates your types and slows the code down. The root cause is that async in Rust removes one of those two base truths: the stack. So now you are forced to write that ugly manual lifetime management code far more often.

This is unique to Rust. Every other language I know of that implements async has garbage collection, so while it remains true they also move stuff off the stack it doesn't change anything. You still use the same types, and apart from sprinkling async's and await's here and there and indenting your closures, your code remains the same.

This is also why I think green threads are a much better fit for Rust than async. Under the hood green threads and async are very similar: they are both ways of doing event driven I/O. Their performance characteristics are near identical. The main difference is that while async forces you to move your state to a different area, with green threads you do it as before and store it on the stack, just like normal code. In fact green thread code looks identical to normal code. The only change you have to make to convert some code to green threads is to change the names of the I/O calls to use non-blocking versions (which is something you also have to do with async, of course). But since the stack is still available, all those fights with the borrow checker that async creates go away, as does all the extra syntax async requires.

It doesn't come for free of course, so the run time performance of green threads and async is not absolutely identical. In async every task shares the one stack, whereas in green threads they each get a new one. This creates some extra memory overhead, chews up considerable address space (which normally isn't backed by memory) in order to protect against stack overflow, and it costs a bit more to set up a stack. But once a task is set up, green threads are going to be a bit faster: you aren't moving stuff on and off the stack, and you don't get hit with those additional run time lifetime checks you were forced to introduce for async. Mitigating green threads' overheads somewhat, a process that is handling thousands of concurrent connections is unlikely to be running on a machine that is memory constrained, so the extra memory probably doesn't matter. (In reality a Raspberry Pi with 4GB of memory can handle thousands of stacks.) And it's likely to be a 64 bit machine where address space is nearly free, so the "considerable extra address space" also doesn't matter.

Still, I can think of one place it does matter. You can have two styles of generators: ones that take the async approach and ones that take the green thread approach (i.e., allocate an extra stack to each generator). Rust nightly does have generators and they currently take the green thread approach(!). But generators tend to be short lived. You tend to use them to iterate over an array, rather than serve a web request (the typical use a task is put to). That means the overhead of creating the stack for a green thread, which is a small proportion of the time a long-running task takes, could well dominate the time it takes to iterate over a small array. Thus async is a much better fit for iterators.

Currently Rust has this arse about face: it has async for long running tasks, and green threads for generators in nightly. With this announcement it looks like this will be half fixed. Great!


Without commenting on the rest of this post:

> Every other language I know of that implements async has garbage collection,

Small note: C++ also has it these days, and does not have GC. You are (as far as I know) left to your own with object lifetimes, as with non-async/await.


> But, generators tend to be short lived. You tend to use them to iterate over an array, rather than serving a web request (the typical use a task is put to). That means the overhead of creating the stack for a green thread isn't a small proportion of the time a long running task takes, but could well dominate the time it takes to iterate over a small array. Thus async is a much better fit for iterators.

The current async approach requires the compiler to statically calculate a fixed size for each invocation context. Except when it can't, in which case you have to dynamically allocate space for each invocation. That same logic could be used to transparently optimize stack creation, opportunistically avoiding both a pessimistically large stack and the costly guard pages. The compiler could even choose to instantiate the generator stack on the caller's stack, just as today.

async Rust optimizes the common case but completely neglects the hard case. In theory Rust users should be able to have their cake and eat it too, especially given that the difficult static analysis work already exists to support the current async model.

(I've made this point before and feel like I may have forgotten some counterpoints. I apologize in advance if that's so.)


Rust had green threads prior to 1.0, and they were ripped out for performance and other reasons. Probably worth revisiting those discussions to see where this has been thought through by the Rust team


> Probably worth revisiting those discussions to see where this has been thought through by the Rust team

The issue generated a lot of hot air at the time, far more than I have time to read, but I think I have the gist of it.

The performance issues were due to an implementation choice. All green thread implementations I'm aware of hide the code colouring that event driven I/O introduces, and Rust did the same. The hiding has to be done at run time. That translates to every I/O call having to choose, at some point, between the blocking and non-blocking implementation. This is typically done using vtables (dyn in Rust), but whatever mechanism is used, it introduces runtime overhead. Worse, it slows down everything - including code that doesn't use green threads. It gets radically worse if you try to hide blocking C calls.

Async has the same issue, but solves it by making coloured code the programmer's problem. If they had made the same design decision for green threads, the performance issues would go away. We don't have to speculate about that. There is a green thread crate out there called "may", and there are independent benchmarks covering it, async Rust implementations, and other languages. "may" beat everything at one point, but it's a very noisy benchmark, so the only conclusion I would draw from it is that "may" looks to run at the same speed as async.

As for the rest of the issues: they revolve around needing a separate stack. I tried to cover the trade-offs above. Summarising: green threads lead to simpler code that's easier to write and theoretically could run faster than async, but have larger setup overhead and use more memory.


That’s part of it, but it is also important that rust be an embeddable language. Ideally you should be able to replace a small component of a larger C or C++ program with Rust. Having a big fat runtime you’ve got to include before running any rust code makes that pretty much a non-starter.


> Having a big fat runtime you’ve got to include before running any rust code makes that pretty much a non-starter.

Just reasoning aloud here. That looks to be another similarity between green threads and async. Async also requires a big fat runtime that isn't part of the language. Instead you have to pick an async colour - such as tokio. That's pretty much what happens with green threads now. You have to pick a runtime such as the "may" crate, which forces the programmer to choose yet another colour.

So many colours, yet they are all just event loops underneath. Colours cause fragmentation, fragmentation is the mortal enemy of reuse.

It seems like the very least the language could do is provide a set of traits for event driven I/O that mirror the existing I/O library in std. Then library writers wouldn't have to colour their code by using a particular event loop implementation. I suspect it's easy enough for green threads or async, but accommodating both styles of event loop would be hard.


Sort of! I think this flexibility (the coat of many colors) is why Rust didn't implement the whole runtime, just Futures, and those futures are as minimal as they can possibly be. The smallest possible executor for futures is really quite tiny, with no need for the complexity of tokio's work-stealing, multi-threaded scheduler and all that. So it leaves a lot of room for minimal executors like https://docs.rs/smol/latest/smol/, or for you to write your own, or even to build some kind of FFI to send futures across to C or C++

I do completely agree with you that it would be great for std to pull in more traits from `futures` and elsewhere to allow it to be easier to write code against different runtimes! I think part of the intent was to get Futures out, see how they get used, and then to go from there. Hopefully we're getting into the "go from there" stage now, which is some of what this article gets into


> That translates to every I/O call has at some point chose the blocking or non-blocking implementation.

Sometimes I feel like the only way to get really good async I/O is to rewrite the whole damn thing to use io_uring. Avoid the blocking syscalls as a whole.

("Rewrite" because this changes e.g. read buffer management. The fundamental APIs will have to change, to enable the performance gains.)


In the words of Steve Jobs, “you’re holding it wrong”.

I’m guessing your code is blocking. You absolutely cannot do blocking code in async in any language, unless the blocking is super quick. No blocking io.

async code should yield great performance if you are doing everything the right way. Yes it is single threaded but either run multiple threads in rust or run multiple instances of your program with systemd.

That applies same to rust, Python, javascript, any async language.

I wrote a prototype message queue in Rust with Actix and got 7 million messages per second via http.


Java, C#, probably Go and others, are able to do async I/O on multiple threads. It doesn't help with the actual async I/O being performed, but it does help with parallelising CPU work, or with the fact that not all I/O can be async and the runtime is faking it, such as local file I/O, which is going to be blocking system threads.

Async I/O is all about multiplexing work on few CPU cores efficiently, and multi-threading is still required. Python or JavaScript are seriously limiting environments and should not be given as an example when we're talking about a language that does 1:1 scheduling.


Tokio's default scheduler is multi-threaded, and Rust does this very well. You just have to be explicit when you want to run some non-async blocking code in a way that won't block a thread.


I don’t really get your point.

You cannot, (should not), do blocking IO in async in any language.

The language provides libraries to do non blocking IO.

What is unclear about that?

Multi threading has nothing to do with async, except as a way to run certain things in an executor thread to avoid blocking.

Also I don’t really know what you mean by async IO…. even in the languages you reference I would guess IO operations are all run in a single thread.


> You cannot, (should not), do blocking IO in async in any language.

On platforms with 1:1 scheduling, of course you can. Blocking I/O executed on another thread, with a callback to execute when done, becomes async I/O (from the user's PoV).

Ofc, when we talk about async I/O, we also refer to the kernel APIs being used, such as select/poll/epoll/io_uring. Say, working with Epoll is usually done via a single-threaded "event loop", but that only listens for the operations possible for an open file/socket. The read/write operations are still potentially blocking, so for efficiency you need multiple threads. The dirty secret is that async I/O, as implemented by Linux, isn't actually fully async.


That's...not...how threads or async work...?

> Blocking I/O executed on another thread, with a callback to execute when done, becomes async I/O (from the user's PoV).

That's not what we're talking about when we discuss languages with async I/O, though. That's just bog-standard synchronous I/O with multithreading.

> The read/write operations are still potentially blocking, so for efficiency you need multiple threads.

That doesn't actually follow. The entire point of language-level async I/O is to be able to continue doing other work while waiting for the kernel to finish an I/O operation, without spawning a new OS thread just for this purpose.


Right, I think async is bad. We have threads, and operating system level isolation with processes. For the most part, these primitives work fine.

My code wasn't blocking, but it did a very small amount of computation, locked on what was typically a highly contended (async) lock, and then dispatched a bunch of blocking I/O operations. Each of these turned into a context switch under the hood.

There were parts of my code that were compute intensive, but determining that without being able to use my normal tools was a pain, and I had to come up with heuristics for whether or not to dispatch with spawn_blocking, since the call had significant overhead.

An abstraction that was meant to simplify my program resulted in me spending a lot more time staring at it.

I think one difference with Python is that the async implementation doesn't try to hide or abstract I/O away from you. If you're doing I/O, you know you're going to block, and thus you're forced to acknowledge it by spawning it on another thread.


That doesn't sound right. In which way was your application slow? Conceptually, async is not necessarily giving you parallelism but concurrency. Tokio (multithreaded) or async-std spawn N threads though and schedule your work for you.

It would be interesting to see the code or know which libraries you used (or even just what type of application you were building).

I built a heavy app using async-std and then rewrote it to threads just to better control what was each thread doing. Without my additional scheduling rules (which brought a lot of other benefits and helped performance in other ways), the performance between async and not async was close, with threads being marginally faster.


Why didn't you use the multithreaded executor from Tokio?


Tokio does sync I/O in the background with dedicated I/O threads, if memory serves (it is sync in the sense that it does syscalls, which are pushed to a dedicated thread).


Even then I'm pretty sure it's not _a_ dedicated thread, but a thread pool that defaults to a maximum of 512 blocking threads. There's no reason it would bottleneck unrelated writes on unrelated Files through the same thread. I tried to dig into the code here https://news.ycombinator.com/item?id=38180860


Tokio has a thread pool for blocking operations. It's not automatic, though, you have to explicitly use it.

I don't understand why some people think async is difficult to write. It's difficult if language doesn't have good support for it, but rust does. I remember async ruby was pretty weird to write initially.


Only for file access, which wasn't supported by async on Linux until io-uring.


No, any spawn_blocking will run on its own dedicated thread


I misread the comment above and read `async` instead of `sync`, my bad.

(Your response isn't entirely accurate though: it's using a thread pool whose size is configurable, not one thread per spawn_blocking)


Depends on the type of I/O (network, file, etc), and the async abstractions that the target operating system supplies.


IO shouldn't be going to one thread, as far as I know. Blocking IO would go to a threadpool.

But you can just `block_on` your futures if you want and not think about it at all.


+1


async took off when multicore processors came out and C had no first class way of running in parallel, so we bolted on threading libraries that are second class. Take a look at zig’s approach to concurrency; it’s so first class you can write your own event loop without an OS, i.e. you could use the language’s async to write an OS.


Async doesn’t really have much to do with multicore. Indeed the most used async environment on the planet (JavaScript) is (used to be) strictly single-threaded. Async is all about keeping the CPU busy even though the stuff it does involves latencies thousands or millions of times longer than CPU timescales – and about abstractions that allow you to pretend you’re writing normal synchronous code when the reality is anything but. This mostly just involves chopping up the code and turning it into a state machine.

Async is almost useless when you’re not incredibly I/O bound. But many people these days are because the web ate the world.

C did and does have a poor almost-anything story but it doesn’t and didn’t really matter because C hasn’t been relevant on the web since 1995 or so.


> Async doesn’t really have much to do with multicore

It does: once you have async, you can multiplex n coroutines on m CPU cores, meaning you can throw libthread out of the window.

> C hasn’t been relevant on the web since 1995

Do you know nginx, a core web technology, is written in C? Linux is also C. Don’t forget that at the end of every web request are syscalls and hardware.


> Async is almost useless when you’re not incredibly I/O bound.

I'm not sure that's true. The async/await model is about representing a state machine in imperative code. Not all state machines can be written imperatively, but when they can be, it is often clearer than writing the state machine manually. I/O is the most common scenario for such state machines, but I can see a few other OS scenarios where you might want to use async/await instead (e.g., process management is probably better represented with async/await).
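To make the state-machine point concrete, here is roughly the kind of hand-written Future that an async fn with a couple of suspension points saves you from writing (names and the no-op waker are mine; a sketch, not the actual compiler output):

```rust
use std::future::Future;
use std::pin::{pin, Pin};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Roughly what `async fn` generates: each await point becomes a state.
enum TwoStepTask {
    Start,
    Waiting { ticks: u8 },
    Done,
}

impl Future for TwoStepTask {
    type Output = &'static str;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        loop {
            match *self {
                TwoStepTask::Start => *self = TwoStepTask::Waiting { ticks: 2 },
                TwoStepTask::Waiting { ticks: 0 } => {
                    *self = TwoStepTask::Done;
                    return Poll::Ready("done");
                }
                TwoStepTask::Waiting { ref mut ticks } => {
                    *ticks -= 1;
                    cx.waker().wake_by_ref(); // stand-in for a real I/O event
                    return Poll::Pending;
                }
                TwoStepTask::Done => panic!("polled after completion"),
            }
        }
    }
}

// Minimal no-op waker so we can drive the future in a plain loop.
fn noop_waker() -> Waker {
    const VTABLE: RawWakerVTable = RawWakerVTable::new(
        |_| RawWaker::new(std::ptr::null(), &VTABLE),
        |_| {},
        |_| {},
        |_| {},
    );
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let mut fut = pin!(TwoStepTask::Start);
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut polls = 0;
    let out = loop {
        polls += 1;
        if let Poll::Ready(msg) = fut.as_mut().poll(&mut cx) {
            break msg;
        }
    };
    assert_eq!((out, polls), ("done", 3));
}
```

Written imperatively, the same logic is just "wait twice, then return": the transform moves the bookkeeping into the compiler.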


Could you be more specific about your bottlenecks? Were you not able to use the multithreaded scheduler for some reason, or did it not work properly?


Cloudflare pipes a large fraction of the Web through tokio. The runtime has a very low overhead, and can scale to over a hundred cores.


Rust Async has very real constraints but it's definitely _not_ supposed to make your app slow. Most will agree that it is best avoided unless performance requirements demonstrably demand it. IMO this advice also applies to other langs' async, but even more so in Rust. The complexity is real.


I think std needs a default runtime, and we might as well make it tokio, but maybe make the single-threaded executor the default instead and tweak the API where appropriate to align with this change.

Swapping the executors out should absolutely be a feature, and the traits should be portable, but a way to start fixing the situation beyond the great suggestions in this proposal is to acknowledge that std and no-std users are different, and std users are often developing applications and prefer sane defaults.


> I think std needs a default runtime, and we might as well make it tokio

Regardless of the quality of this idea, it isn't going to happen: neither the Rust Project nor Tokio (in my understanding, to be clear I am not involved in either) want this to happen.

> Swapping the executors out should absolutely be a feature, and the traits should be portable

I am not fully aware of all of the details here but there are significant problems when it comes to actually getting this done; I don't think anyone is ideologically opposed, but there's a lot of practical considerations that make this difficult, in my understanding.


To pick out one example of what makes it challenging: not all executors place the same type system constraints on tasks they execute. Tokio, for example, is a work-stealing executor which requires tasks to implement the `Send` trait so they can be sent between threads, while other executors may never move tasks between threads and therefore don't require the `Send` bound.
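To make the difference concrete, here's a sketch with stub executor bodies; the signatures are modeled loosely on `tokio::spawn` and `tokio::task::spawn_local`, and the actual scheduling is elided:

```rust
use std::future::Future;
use std::rc::Rc;

// A work-stealing executor may move a task to another worker thread at any
// await point, so it must demand Send (shape modeled on tokio::spawn):
fn spawn<F>(_fut: F)
where
    F: Future<Output = ()> + Send + 'static,
{
    // a real executor would queue the task here
}

// A thread-local executor never moves tasks between threads, so it can
// accept !Send futures (shape modeled on tokio::task::spawn_local):
fn spawn_local<F>(_fut: F)
where
    F: Future<Output = ()> + 'static,
{
}

fn main() {
    let rc = Rc::new(1); // Rc is !Send, so a future capturing it is !Send
    spawn_local(async move { drop(rc); }); // fine on a thread-local executor
    // spawn(async move { drop(rc); });   // would not compile: future isn't Send
    spawn(async {}); // a Send future satisfies either bound
}
```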


Yes, this is a major difference. But at this point, let's be honest - why has most stuff converged on Tokio? IMO it's not for reasons wholly of merit - it additionally won a popularity contest as one of the first movers. Why the popularity contest? Because people want an async executor, but they just want it to work. These people largely don't care about the benefits between different executors.

I think they'd like the choice to use a different one, but they'd rather just have something available with async traits that trend toward opinionated. The executor issue is a huge problem, because the ideology of "zero opinion" on executor coupled with "ease of adoption" is completely at odds. I don't think the current trajectory will ever resolve nicely.


There’s definitely a network effect that makes everyone converge on tokio.

However, I don’t think the situation can be improved by putting an executor in std. That will even more strongly make everyone stick to the standard one.

The problem isn’t that it’s hard to pick an executor (you can pick tokio without thinking). The problem is that when someone has a legit reason to use a different executor, it’s hard to avoid dependencies using tokio, and it would be even harder to avoid dependencies using a built-in executor.


We experimented with both tokio and async-std and picked tokio on the merits. This was ~2 years ago, so things may have changed a bit, but tokio provided a lot more out-of-the-box, and we also wanted a multi-threaded executor so its defaults worked well for us.


I actually think we have a cultural problem due to the fact that many of the people coming into Rust recently are coming from a "full-stack" webdev background, where "frameworks" are huge, and feels like nobody worries about being coupled to one framework or another, just picking the right one. Which is in large part due to the history of JavaScript and so on. This is an influx of people who sense that tokio is the "right" way to do things -- tutorials, blog posts, example applications, and 3rd party libs all start with tokio... and they're not philosophically against being tied to it.

Whereas my own sense of the Rust philosophy is supposed to be one of zero-cost abstractions (when possible), and for the language to provide nuts and bolts that I can assemble myself. My interest is in systems eng. I don't want to be tied specifically into the systems-eng choices that Tokio happens to have already made, even if they might be good ones. I want the ability to choose. It's not healthy, in my opinion, for one entity to dominate choices like this.

If this can't be resolved, my sense is that people who have the same impulses as me will choose to either not use async at all, or move on from Rust.


I agree with you 100%, but members of the Rust project have stated in the past that async, in not insignificant ways, is a play to deliberately target the higher level web/service based crowd. So, the cultural issue is partially self-inflicted.

I'm not even sure what I want in Rust. All I know is that people have a legitimate gripe being sold on Rust async for high level work, having it ergonomically fall much flatter than it should. It's very hard to satisfy everyone, which they are learning day by day.

I do wish for the async story to mature a bit more so the implementors have a chance to show the community how good it can be. But the project needs to consider its messaging to the users it has courted over.


> the higher level web/service based crowd.

Not everyone that writes network services is "higher level" or "web".


That is entirely fair. But back to the topic -- some of us building network services etc would like options, and not to be tied to a specific async runtime. E.g. I was looking at what it would take to move my code over to monoio. Or anything else. Not going to happen, because too many 3rd party deps mandate tokio.

That's not a good situation. I'd as soon rip out async and optimize the concurrent multithreaded code myself than be tied down that way.

FWIW my day job is on embedded Linux, small systems that sit in tractors. We don't do async Rust. It's all actor-style communicating components. And it works pretty much fine.


I hear you, options would be nice for sure. We'll see if making things more generic is in the cards or not, I guess.


I believe that without.boats (or somebody of similar prominence) proposed that pollster (https://docs.rs/pollster/latest/pollster/) or a similar async executor be pulled into std.

I think it's a great idea. By blessing something that's not tokio you give a good incentive for library authors to test against more than one executor. And by being so far from fully featured pollster is never going to "win" so blessing it doesn't appoint a winner. And it gives an obvious solution for what people who don't want to use async in their code but do want to pull in an async library should do.

edit: it was without.boats in this post: https://without.boats/blog/why-async-rust/
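For a sense of how small it is: pollster is essentially a park-the-thread executor. Here's a std-only sketch of the same idea (not pollster's actual code, which uses a Condvar):

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// Wake by unparking the thread that is blocked inside block_on.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// Poll the future; whenever it's Pending, park until the waker fires.
pub fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            // unpark "tokens" are buffered, so a wake that lands between
            // poll and park is not lost
            Poll::Pending => thread::park(),
        }
    }
}

fn main() {
    assert_eq!(block_on(async { 1 + 2 }), 3);
}
```

Note the caveat raised elsewhere in this thread still applies: this only works if the futures you poll wake the waker themselves, rather than relying on a Tokio reactor running in the background.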



I don't think it would be that good. Having multiple runtimes is great for specific use cases. I would also fear that having "one standard endorsed runtime" would lead either to the death of the alternative ones, or to the blessed one freezing once it's in the stdlib and falling into oblivion (like many packages in the Python standard library).

What the stdlib actually needs is the proper set of traits/facades/whatever to interact with the current runtime. Just like they did with the Future trait. And add the handful of traits that go with them that tokio, futures, smol, etc have: AsyncRead, AsyncWrite, Stream, Sink, et al


There's two sets of missing things here.

The first is interfaces to represent async versions of existing core sync traits (that's the AsyncRead/AsyncWrite/Stream/Sink/etc. you refer to). What makes this somewhat awkward is you're providing these traits without an implementation of them in the standard lib.

The other thing that's missing is something like GlobalAlloc. You need a generic executor runtime interface to handle some of the executor things you need like "schedule new task".

In general, I think there's a class of features where you need to have just one global (really, process-wide, not just library-wide) provider of some service, and it may be worth having language features to provide this functionality. Memory allocation is one specific area; async executors is another topical area. But you can also throw in stuff like signal handlers or logging or service providers or error handling or tracing features, etc.
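As a sketch of what a GlobalAlloc-style hook for executors might look like; to be clear, none of these names exist in std today, the trait and functions are hypothetical:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::OnceLock;

type BoxedTask = Pin<Box<dyn Future<Output = ()> + Send>>;

// Hypothetical process-wide executor interface, analogous to GlobalAlloc.
pub trait GlobalExecutor: Sync {
    fn spawn(&self, task: BoxedTask);
}

// One provider per process, registered once (GlobalAlloc uses an attribute
// for this; a OnceLock registration is the simplest library-level analogue).
static EXECUTOR: OnceLock<&'static dyn GlobalExecutor> = OnceLock::new();

pub fn set_global_executor(e: &'static dyn GlobalExecutor) -> Result<(), ()> {
    EXECUTOR.set(e).map_err(|_| ())
}

// Libraries could then spawn tasks without naming a runtime.
pub fn spawn(task: BoxedTask) {
    EXECUTOR
        .get()
        .expect("no global executor registered")
        .spawn(task);
}

// A toy provider that just counts spawned tasks, to exercise the interface:
struct Counting(AtomicUsize);
impl GlobalExecutor for Counting {
    fn spawn(&self, _task: BoxedTask) {
        self.0.fetch_add(1, Ordering::Relaxed);
    }
}

static COUNTING: Counting = Counting(AtomicUsize::new(0));

fn main() {
    set_global_executor(&COUNTING).unwrap();
    spawn(Box::pin(async {}));
    assert_eq!(COUNTING.0.load(Ordering::Relaxed), 1);
}
```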


Agree on both points. While I don't find async rust hard to use as a library consumer, trying to write a runtime-agnostic library is (IMO) more difficult than it needs to be. I would like to see most of `futures` either be incorporated into the stdlib or used more consistently throughout the various runtimes.


I think we'd need, at least, runtime agnostic locks, channels, and a 'spawn' function (trickier). And ideally some I/O primitives somehow not tied to the runtime?


In reality there aren't multiple runtimes in a real, pluggable, generic sense. There's tokio and then some niche things that are hard to build on top of because they're not tokio (despite being possibly technically superior in some way).

Every async-focused dependency that is relevant effectively mandates Tokio. And I've been ridiculed in public and private forums for suggesting that there should be a way to write libraries generically so that they don't have a runtime dependency. (Right now, services as basic as locking and task spawning are coupled to tokio, at least, and people use them all over.). Major new things are being made, all coupled to tokio.

And I think this is a shitty situation for a whole bunch of reasons. And others (looks like you, too) agree, but it feels like there's just no way this is going to get fixed.

Like, this article here, it doesn't even mention this as a concern? In the context of a big picture discussion of async over the next 4 years.

So I don't foresee any progress on this front, and it makes me want to just rip async out of my code entirely.


What makes Rust great is that its design drew upon decades of understanding of programming language theory and practice, and the designers took bold decisions based on that understanding to avoid the mistakes of other programming languages.

The problem with async Rust is that the async idiom is new, and its interactions with the rest of the software ecosystem not particularly well understood. This makes async support glaringly different from the rest of the language.

I'm glad the designers seem to be taking a step back and reconsidering how everything fits together.


Async isn’t new in programming language theory. It’s syntactic sugar for state machines and continuations. I think it could be argued that PLT was already way ahead of async/await – monads are more general than futures, and Rust’s async wasn’t generalized to be an effect system.

Also it’s simply not true that async wasn’t fully understood or carefully evaluated. It took years to design, and then bikeshed every detail, to the point people involved were burned out. It had multiple prototypes, and an early callback-based implementation used by hundreds of libraries, in production, for over a year. It’s probably the most thoroughly designed and tested feature in Rust’s history.


> I'm glad the designers seem to be taking a step back and reconsidering how everything fits together.

This is simply not what is happening here. The post is clear that this is about filling out and finishing up the plans that were laid down when async was initially designed, not changing how things fit together.


Adding Move and deprecating pin/unpin would be a huge ergonomic improvement I think. I also think the Rust project should consider “out-of-band” editions if they can deliver big features like that sooner rather than waiting until 2027.


Note that in general, the Rust project has chosen a time-based, not feature-based, release model. New Rust versions come out every 6 weeks, and new Rust editions have a consistent cadence of every 3 years. There are advantages and disadvantages to debate between time-based and feature-based release models, but Rust pretty clearly has chosen their preference.


Then they'll just add a new #[from future import Move] syntax.


The important aspect of the 6-week releases is that they’re frequent, so nobody feels a need to rush a feature to cram into the next big release.

Editions happening every 3 years unfortunately undo this, and there is a pressure to land changes before the upcoming edition.


I agree that Pin is super confusing, and a Move trait would be much better. I'm willing to wait though :)


I kinda lost track: what is the current status of async in traits? Does it still rely on the macro?


The feature is shipping before the end of the year.


I thought async and await were kind of logical builds on top of generators but this is saying rust has no generators.

That’s how it works in javascript and as a commenter mentions in Python too, right?


They are in some languages, but Rust instead implemented them with a pollable Future trait. The async keyword, then, causes the block to desugar into a new type that implements Future. In order to emulate a coroutine, the generated poll function includes a state machine that tracks the await point where the function was suspended.
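Roughly, here's that desugaring done by hand (an illustration, not the compiler's actual generated code): an `async fn add_one(x: u32) -> u32 { x + 1 }` has no await points, so its state machine completes on the first poll.

```rust
use std::future::Future;
use std::pin::{pin, Pin};
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// The state machine: one variant per suspension state. With await points
// there would be extra variants capturing the live locals at each point.
enum AddOne {
    Start(u32),
    Done,
}

impl Future for AddOne {
    type Output = u32;
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        // AddOne holds no self-references, so it is Unpin and get_mut is safe
        let this = self.get_mut();
        match std::mem::replace(this, AddOne::Done) {
            AddOne::Start(x) => Poll::Ready(x + 1),
            AddOne::Done => panic!("future polled after completion"),
        }
    }
}

// Minimal no-op waker so we can drive poll() by hand:
struct Noop;
impl Wake for Noop {
    fn wake(self: Arc<Self>) {}
}

fn main() {
    let waker = Waker::from(Arc::new(Noop));
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(AddOne::Start(41));
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Ready(42));
}
```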


They're intertwined, but generators have yet to stabilize https://doc.rust-lang.org/beta/unstable-book/language-featur...


It's also how it works in Rust. Stable Rust isn't allowed to use generators, but the code that desugars `async fn` is, because it's part of the compiler.


That's true for implementations like python async, but rust operates a bit differently (mostly to avoid allocations across await points). The author of this article has some other very good posts about it.


They are logical builds on top of coroutines, as are generators. There is a generator initiative in the works, but it does seem odd that async exists and generators don’t given that the “pausable state machine” nature of them is so similar.


Hey boats, I found this sentence confusing to read (emphasis mine); an editing mishap perhaps?

> On the other hand, there was some speculation about making “await patterns” that destructure futures and then somehow making that work here; I think this would imprudent and leaving await as an expression, and for await as a special expression for handling AsyncIterator, is the most sensible choice.


Tangential to some comments here: Why does the standard library not have block_on?


As far as I know, the proposal to add one was determined to need an RFC: https://github.com/rust-lang/rust/pull/65875


Another instance of a trivial and obvious feature that can only really be designed one way that is not implemented 4 years after initial proposal.


I am not fully sure I agree with the first part of your assessment, but I share the frustration that a feature like this, which would be useful and is not particularly large, is basically ignored by the working group, while larger, more controversial, and less useful features, like keyword generics, are pursued while the more useful things languish.


As the person who proposed this feature - among lots of other ones for async Rust, like run-to-completion async functions (https://github.com/Matthias247/rfcs/blob/e7fd7042f8069e9126e...), structured concurrency, cancellation tokens, etc) - I can unfortunately share this sentiment. It seemed really hard to land anything in async Rust since the priorities don't seem overly clear. Therefore I put a pause on all contribution attempts and went back to just using Rust as a user.


block_on is an executor that polls a future until it returns Ready. If the future is not ready, the executor waits until it is woken up. If you poll a future that relies on a specific runtime (e.g. an async library that uses Tokio) to wake up the executor, a simple block_on will get stuck.

A generic block_on in std would be a footgun for new programmers I believe.


The proposals to add block_on generally include an executor like pollster.


Has there been any progress with the effects system WG? That seems quite relevant to the proposed "async gen" syntax.


Why is yield not a suffix operator like await?


`.await` being postfix makes it easy to chain with further method calls (eg `foo().await.bar().await`). `yield expr` returns `()`, so there is no use in chaining it.
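To illustrate with a hypothetical async API (`connect` and `query` are made up; the sync twins exist only to check the value):

```rust
struct Conn(u32);

impl Conn {
    async fn query(&self) -> u32 { self.0 * 2 }
    fn query_sync(&self) -> u32 { self.0 * 2 }
}

async fn connect() -> Conn { Conn(21) }
fn connect_sync() -> Conn { Conn(21) }

// Postfix .await reads left-to-right, mirroring the sync chain exactly;
// a prefix keyword would force nesting: `await (await connect()).query()`.
async fn chained() -> u32 {
    connect().await.query().await
}

fn chained_sync() -> u32 {
    connect_sync().query_sync()
}

fn main() {
    assert_eq!(chained_sync(), 42);
    let _fut = chained(); // compiles; value checked via the sync twin above
}
```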


Thanks for the explanation. I was thinking of Python's yield, where it evaluates to whatever is .send()-ed back into the generator.


Async is a huge wart and I try and avoid it wherever possible. It's simply not useful in a lot of cases that I encounter. However, if a library uses async, you have little choice but to make your whole project async. This adds to its horrible reputation.


This is consistent with the author's assessment - the incomplete implementation of async makes it very challenging to advocate for, despite the fact that it was the only logical choice given Rust's design goals.

I will say that the notion that you must make your whole project async is largely true (except that you can block on futures with all runtimes), but this is more symptomatic of the fact that Rust has hitched its horse to two wagons that don't have clean overlap. It is effectively two languages trying to converge in the middle.

It's equally desired at "web server and above" and "operating system and below", which probably makes it one of the hardest languages on the planet to design (let's not forget it deliberately has no runtime, which makes things even more difficult). Whether or not this is a good idea remains to be seen over time, but they are in largely uncharted territory so I'm prepared to give them some slack on it while they figure it out.

That said, holy has it taken such a long time to round out the async story to make it feel better. I don't blame users not wanting to wait it out because it has felt like an eternity.


> It's equally desired at "web server and above" and "operating system and below", which probably makes it one of the hardest languages on the planet to design.

My theory as to why C++ has become as problematic a language as it has is that it is able to do it all. And "all" doesn't fit nicely into a single package or paradigm.

I don't like async (in any language), so your post immediately piqued my interest. I'm not that interested in 'webserver and above' so of course that would be where people find async useful.

It is possible that there is no way to unify these two domains into a single language (at least without becoming c++). Although, then again, it might be that if you think about it long enough then the unification mechanism becomes apparent.


The problem with C++ is that every feature

- attempts to solve too many problems,

- has significant papercuts that have to be considered,

- has significant runtime cost,

- interacts poorly with other features, or at least leaves significant edge cases open,

- has non-uniform compiler support (although in OSS land only GCC and LLVM matter), and

- creates arcane error messages that are anything but fun to analyze. Especially when templates are involved.

Above everything is the fundamental problem that the unsafe parts of C are everywhere, and to master C++ you have to become really proficient at diagnosing problems with them, because you will run into them. Only then can you start thinking about fun things like software architecture. Always use a linter/static analyzer.


Don't forget that C++ features are also often rolled out before anyone even really understands them, and then after they are rolled out people find all kinds of footguns and issues with them.

The C++ committee basically evaluates features almost entirely in the abstract, writing papers and discussing privately among a small group of people about things instead of working on proof of concept implementations, getting actual real world feedback on it, or having something along the lines of Rust's nightly where experimental features can be tried out.


> The C++ committee basically evaluates features almost entirely in the abstract, writing papers and discussing privately among a small group of people about things instead of working on proof of concept implementations, getting actual real world feedback on it, or having something along the lines of Rust's nightly where experimental features can be tried out.

Sitting on the C++ committee, we actually demand a lot of proof-of-concept implementation of proposals before they'll be accepted.


That list of bullet points (except the compiler one) is also applicable to the whole async ecosystem in Rust :)


Agree on papercuts and interaction with other features to some degree. Absolutely disagree on runtime cost. The design of async has as minimal a runtime cost as seems theoretically possible, and avoids a lot of the runtime cost of async in other languages. It's really an impressive engineering feat given the constraints and I think there's a real case for it being a "zero-cost" abstraction (i.e. you couldn't implement a faster version by hand).

IME the bad compiler messages tend to come more often from the `async-trait` macro than from anything fundamental to async itself. Hoping the stabilization of async-trait helps with that. Backtraces with async are definitely a bit nasty though.


I for sure hope that Rust gets at least the safety aspect right :) I could endure a lot of suffering if I knew that concern to be taken care of.


Agreed. The classic claim about C++ is that everyone wants to remove half of its features, but no one can agree on which half.


C++ is problematic in that it's a memory-unsafe language with a lot of cruft dating back to the 1980s. Its generality has made it more not less successful, since that makes it a lot easier to reuse library and support code across domains.


I think async is still more in the "trendy" camp than being a proven asset. After all, thread per connection is perfectly viable for the vast majority of servers. Most people aren't re-writing NGINX, after all (hopefully...).

It's a shame Rust let itself be distracted by it instead of focusing on refining its strengths and developer experience.


> Most people aren't re-writing NGINX, after all (hopefully...).

those people can probably use another language (Java, Go, C#) if absolute performance is not critical for them.


That's a false equivalence. I can have a performance-limited server that is not limited by concurrent connection count, in which case thread-per-connection is both simpler and scales just as well. async doesn't optimize performance in general; rather, it optimizes for the number of open concurrent connections specifically, which primarily helps only if you aren't doing much of anything in terms of compute per connection.


> However, if a library uses async, you have little choice but to make your whole project async.

This isn't true. I'm writing an application right now that is mostly sync, but has a small amount of async code in it. It's easy to spin up a tokio runtime which can run inline on the current thread. Then use it to evaluate a Future.

    tokio::runtime::Builder::new_current_thread().enable_all().build().unwrap().block_on(async {
        println!("I'm async!");
    });


Because this code block looks quite complex, I want to add that it can also be just

    smol::block_on(async {
        println!("I'm async!");
    });
(I thought tokio had a helper like this too but could only find `tokio::runtime::Runtime::new().unwrap().block_on(async { println!("I'm async!"); });`.)


Needing a helper library for something as simple as async so you don't go mental is really not good enough. I see the same thing with error handling - every Rust project I see imports a helper because it's too clunky otherwise.


If you don't want to pull in a helper library to run async code in a sync context, then why pull in an async library at all?

Rust is not a batteries-included language like Python. There are lots of libraries that are very commonly used in most projects (serde, thiserror, and itertools are in almost all of mine), but this is a conscious choice. They say in Python that the stdlib is where projects go to die. I'd rather have the flexibility of choosing my dependencies, even for stuff I have to use in every project.


The problem is that a large number of popular libraries have converted to async, 95+% of them to Tokio.

So you are stuck with smaller, less battle tested products if you'd rather not pull in 100+ crates of dependencies that are doing nothing but inflating the build times and file sizes (for your particular usecase).

Example: reqwest vs ureq


OK, but like, can we just be honest then that the problem here is that your build times go up? People act like it's an insurmountable problem rather than just a trivial trade-off where, yes, your build times will go up because of some extra dependencies on an async runtime.

Increased build times are not great but holy shit the way people talk you'd never know that that's the actual trade-off here, an extra 3 seconds on a clean build.


Adding dependencies is not easy in some organizations. You have to trust them, in addition to waiting 3 seconds to compile them.


OK, in some niche scenarios I can see the cost being larger, but I think this is totally overstated.


Usually you're reaching for block_on because a library you want to use is async. Almost certainly the library you're using will already be depending on an async library, so by pulling it in yourself you're not adding additional dependencies.


Then use the code I posted if you don't want a helper library. Or just wrap it in your own function if it's too complex for your tastes.


Every day we stray further from god.


This is the kind of thing you write once to abstract out async things in a function and call it a day. It really isn't that bad. Besides you can just use:

    smol::block_on(async {
        println!("I'm async!");
    });
if smol is an option.


The complexity of this code snippet almost seems satirical (though I do get the point you're making, and agree with you).


On one hand, I agree that on the surface, this looks complex, if you don't read it.

But on the other hand, just read the code. It's not complex.

You drill down into a tokio namespace. You make a builder object. You unwrap it. This is idiomatic Rust. It's verbose, but explicit is better than vague. There's no conditional logic. There's no weird type-fu. No macros. There aren't even any parameters to supply, other than the Future to block on.

It's trivial to write a wrapper to go from chained methods to a helper call.


The next time a Rust fan pokes fun about Java's verbosity, I am copy-pasting that.


Just noting for other readers that, while killercup posted another option using `smol`, this seems to me to be in line with Rust's philosophy of explicitness, which is something I really appreciate about the language:

- create a builder

- run on the current thread only

- enable all drivers

- create the instance

- unwrap (panic on) any error

- then call some blocking async code

Would look nicer if split out onto multiple lines I imagine:

    use tokio::runtime::Builder;

    let runtime = Builder::new_current_thread()
        .enable_all()
        .build()?;

    runtime.block_on(async { println!("I'm async!"); });


I will forever argue that things like goroutines and Java's Project Loom are proving that implicit concurrency is, long term, more successful than an explicit async keyword / operation with its own invocation semantics, because it doesn't dictate the shape of your code or create the function coloring problem.

I'd love to see this in Rust, and I know that it initially had something like this, perhaps its not a bad idea for someone to take a second look at this now that more time has passed.

EDIT: to be clear, that is not me saying we need goroutines or a Java Loom equivalent in Rust; simply that the DX of these two examples is far superior to the DX of async Rust today, in my opinion


This was the subject of my previous post: https://without.boats/blog/why-async-rust/


How can you argue that? Loom is extremely early, it's not proving anything.

And Java/Goroutines don't have a keyword because they do things implicitly, they have heavy preemptive runtimes.


That's a red herring IMO. Sure, they're GC'd languages, but that does not preclude the fact that both of these approaches are easier to work with from a DX perspective, and that matters in language design.

I didn't say we need goroutines or Loom-style asynchronous primitives. I am, however, pointing out that they did a really good job of making async approachable and feel like you're writing the same code as you would in a traditional synchronous model. That's the real win.

It's not an easy problem to solve, to be absolutely clear, but it's a worthy goal, and if that means it adds marginal overhead initially I think it's worth the tradeoff (esp. if, similar to how you can use Rust in a `no_std` build, you could do a build without the async runtime; spitballing off the top of my head).

Another problem is that, generally speaking, async is de facto used via a 3rd party lib, and it really should be a first party primitive that everyone feels comfortable using.

To me, this is what matters


> I didn't say we need goroutines or Loom-style asynchronous primitives. I am, however, pointing out that they did a really good job of making async approachable and feel like you're writing the same code as you would in a traditional synchronous model.

You created a dichotomy - languages with a native construct for async versus languages that provide this as libraries. But that dichotomy does not exist - both languages have native concurrency support through their runtimes. That was what I was pushing back against.

> async is de facto used a 3rd party lib, and it really should be a first party primitive that everyone feels comfortable using.

I think this really remains to be seen. Rust has always had a "just pull in a crate" mentality and a "be very conservative about what's in std" approach, and I think the community is overwhelmingly in favor of that. Any major stdlib changes should be taken pretty seriously.


I agree, it should be taken really seriously.

I think 4 years' worth of a feature being out in the wild and in use is enough time to start having conversations about what went well and what didn't, and how to address that. It's very clear to me, at least, that async in Rust is becoming more and more dependent on tokio. Most of the major async-supporting crates support tokio and/or only leverage tokio. Just a cursory glance at the crates registry supports this much.

I'm not against a "pull in a crate" mentality mind you (though, careful what you wish for here, see: NPM / Node ecosystem). However, it is worth identifying when something is becoming / has become / is considered to be such a core feature of the ecosystem that it would benefit greatly from stdlib support, and I think this fits that definition based on the evidence I've seen, at least. Though I realize others may not share this sentiment, I think it's a viewpoint that has evidentiary backing (see all the talk about async Rust in the community, the issues etc. surrounding it; it's already a pretty big buzz topic relative to other things surrounding the language).

All this is to say, maybe it's time to seriously start thinking about what first party support will look like if we bring a first party async / non-blocking I/O platform into the stdlib


> careful what you wish for here, see: NPM / Node ecosystem)

Just to be clear, I think the NPM ecosystem is generally great and a massive success. People totally overblow the issues, and none of them are actually because of a small std library or due to the ease of install/publish.

> however, it is worth identifying when something is becoming / has become / is considered to be such a core feature of the ecosystem that it would benefit greatly from stdlib support,

I agree, and I think that there are a few places with regards to async that could work well here. Maybe some kind of Executor trait (hard, but maybe possible?), probably `std::block_on`.

> see all the talk about async Rust in the community,

FWIW I think the majority of people are just happily doing async work in Rust and don't get too involved in the discussions. I'm one of those people, except I'm also an internet addict on extended PTO so here I am.

> All this is to say, maybe it's time to seriously start thinking about what first party support would look like if we bring a first party async / non-blocking I/O platform into the stdlib

I agree, I think some of this is best done in the language but some should be in std. No question.


> both of these approaches from a DX perspective are easier to work with, and that matters in language design.

While you are right that DX matters, DX is not the only thing that exists. Design is about balancing constraints and tradeoffs. Rust has other design commitments that preclude using this design, before you even get to DX questions.

(Furthermore not everyone agrees that these things are clearly better from a DX perspective, as DX is a subjective topic.)


I agree, there's questions that need to be answered, and I don't know that I even have answers to give to them.

I will say that it's clear, to me and a good portion of Rust users, that async in Rust needs DX improvements. It's one of the top features of the language, it's used a lot, and people struggle with it often[0]

[0]: To be fair, async in any language trips up developers pretty often, though I think Rust can give someone a particularly bad time. Granted, I have not compared it to C / C++ as I don't do any development in either language currently


> I don't know that I even have answers to give to them.

To be honest, this is why this discussion always gets frustrating: people demand change that is impossible, and then when pushed for how to accomplish the impossible, they throw up their hands. I do not think you or anyone else is doing it maliciously, but for some reason, on this specific topic, it happens endlessly.


My best shot at this is something akin to a runtime built into the stdlib that can be opted out of, similar to how you can build something with `no-std` so there is no reliance on `libstd`. Perhaps this is a great place where you drop into more "raw" async programming using the async / await primitives. You lose DX sugar but slim out the runtime for embedded work.

Ideally, the runtime could handle anything written using lower level primitives as to not completely kneecap libraries that need to work with `no-async-runtime` (or whatever you want to call it).

This would at least alleviate a common concern I hear around this, which is runtime bloat.

That seems like a step in the right direction to integrate a unified async runtime with an alternative / better syntax[0]

[0]: I want to note, that C# is the only language I ever worked in that supported async in two constructs. There is the traditional async / await, which is by far the most common. Before that though, there was event driven async programming (with support for background workers and other async features) and that is also still supported, and they can interop with each other (with some caveats)


The only thing this would change is not needing to "cargo add smol" which already exists as a smaller runtime if you need it. That wouldn't solve the fact that a large part of the ecosystem would still need the features of Tokio, and would still use them. So you still have interoperability problems, until figuring out how to make runtimes swappable, and that is open-ended work. (and also assumes that smol's particular choices are correct for this, and that everyone maintaining the stdlib agrees that they are, and that the folks on the libs team are willing to step up and start maintaining an entire async runtime, etc etc etc)


Async handles thread contexts better. Maybe I'm out of the loop but implicit blocking patterns haven't taken over gui programming just yet.


Loom is very new, and Java on the desktop is almost dead. Even though virtual threads could be useful for GUI programming, I don't expect significant innovation in that area. But the upcoming Structured Concurrency[0] and Scoped Values[1] JEPs make things hopefully easier.

[0]: https://openjdk.org/jeps/462

[1]: https://openjdk.org/jeps/446


Structured concurrency looks good but is it really better than async syntax?


I have to admit that async syntax has a certain charm, as it has deep connections to iterators. (I have to program with Unity for my master's thesis.) But Structured Concurrency leverages well-known blocking syntax that doesn't require further explanation. It's not even specific to virtual threads.


Goroutines and the Java Loom are just fibers. They're like a super clunky mix of async-like and thread-like features, though sometimes there are reasons to use them. (For example, stackful fibers might enable cleaner interactions with OS facilities or outside library code.) Rust makes it real easy to just use threads if that's what you prefer.


This is a popular sentiment that I share to some degree, but I really wish we could just move on from this discussion. It happens in every post that is even just vaguely about Rust or async. It's like reading complaints about the GIL on a post that is loosely connected to Python! At some point it just gets boring.

"I will forever argue that [...]" -- Why are you forever arguing?

"I feel like I am alone with [...]" -- No, I read it every day on here.


to be honest, HN is one of the few venues where I think someone who is core to a project might actually see what is said about a language. I've tried getting involved in mailing lists and stuff, but they aren't often as welcoming to this discussion as one might assume.

My hope is that by raising these things on HN, someone will take notice in a different way and at least start considering / revisiting alternatives


All of this would be correct in an alternate reality where the Rust community, including everyone ever involved in the language's evolution, was unaware of these complaints.


Your comment is self-defeating. The reason people are aware of these complaints is because they keep being brought up, over and over.

This is a serious wart on language design and while I can agree that it's likely too late to fix it for Rust, there is a kind of race among new languages to be a successor to C++ in many of the domains C++ is used in, and while I think Rust does hold a lead in that race, the race is not over.

A language that can provide an ergonomic solution to concurrency would absolutely provide a huge boost to any such language, and so to people in that space, listen to these complaints. Async/await is not a good solution to this problem.

You almost always hear people complaining about async/await in every language it's a part of but you rarely hear people complain about how Go manages concurrency.


What Rust libraries use async that you think should not? I can think of one, but otherwise every library I've encountered uses async because its intended for the kind of networking service that benefits from using non-blocking IO.


Just because it's a network request doesn't mean it benefits from using async.

EDIT: to add a specific example, postgres. I understand from sfackler's perspective why it makes sense to maintain just an async library and a sync wrapper around it. But from the user's perspective there's absolutely no reason that all of tokio should be required to talk to their db.


As a user who has maintained network services that communicate with Postgres, I'm very glad that sfackler's postgres library doesn't perform blocking IO calls, as that would make it unfit for purpose for me.


Maybe you misunderstand me. I'm not saying there shouldn't be a non-blocking version. I'm saying async is not a silver bullet. Some programs/teams/environments have different constraints than you.

Would you also advocate for the kernel to deprecate blocking i/o?


Obviously, it'd be better for someone who wants to talk to postgres and doesn't need non-blocking IO if a version based on blocking IO existed. But what is your grand point? It's not some conspiracy that libraries for network services use async/await: they most often do it because their maintainers need to use async/await in the work that pays for them to provide this open source library free of charge to you.

And all you need to do to not deal with async is wrap it in one of the many different block_on implementations. Yes, that's less ideal than not having to pull in these dependencies; if avoiding those dependencies is mission critical for you, maybe yours could be the enterprise that pays for the blocking IO postgres library.


"async" and threads are not the only options for concurrent IO.

I've never seen anyone discuss using poll(2)-style concurrency here, which is interesting but also kind of sad.

The mio crate, for example, uses the operating system's "select/poll" interface. On Linux this is epoll(7), but other OSes have their own interfaces.

"poll" style concurrency is a lot less complicated for me. The general idea is to register IO handles such as sockets into a data structure and pass it into the kernel. The kernel will emit events whenever a handle is ready to do something. There is the option to go to sleep and block until one of the handles are ready but it's not required.

"async" is an abstraction on top of this. The async runtime is handling the event loop and other stuff for you which is very useful but also a little bit complicated.

I am not in any way against "async" style concurrency but I think a lot of libraries could avoid pulling in a runtime and still be concurrent.


"mio" was the game in town for years and it was definitely not ideal at all, it was promising but very low level and most people didn't use it if they could help it. People wanted async/await.


Several of my rust programs use tokio's select! macro as their main loop. I really like that pattern.


Nothing is stopping anyone from writing DB drivers that don't require async. If there's as much of a market for it as threads like this suggest, it seems like it would be a fairly popular project.


Huh, that's a rather odd argument. It's like saying no one is stopping anyone from writing a new low-level, memory-safe programming language that won't have a Rust-style async system. If there is real market demand, it could be a fairly popular project.

DB drivers, http servers, runtimes, and so many other complex components are the preferences of highly skilled developers, not some objective market choices. Once one is developed, most application developers have to use it, however user-unfriendly it may be.


One thing about Rust is that it's a fairly young language. We've had to implement a number of things ourselves that we wouldn't have had to implement in other languages.

If there's not enough community interest for there to already exist a solution for your problem, you really only have three options: write it yourself (including opening a PR to add it to an existing implementation), pay someone to write it, or switch languages. If none of these are options, you're just out of luck, and complaining about it is unlikely to do any good. I'm not saying you can't complain of course, but I am saying that you're more likely to get what you want via another path.

But regardless, I don't think it's an odd argument. Indeed, no one is stopping anyone from writing a new, low-level, memory-safe programming language! That's why Rust exists! Turns out the market was huge! Writing a new one now would be easier because of all the great ideas that Rust helped to prove work at scale. We're seeing this already with the success of languages like Zig, which makes different tradeoffs than Rust did and seems to be finding success in slightly different niches.

And sort of disproving your point, we use postgres, and I can think of three different implementations of postgres drivers offhand (sqlx, tokio-postgres, and diesel). I'll also note that the author of tokio-postgres also publishes https://docs.rs/postgres/latest/postgres/, which is not async! It's impressive how many options we have for such a complex thing in such a young ecosystem.


postgres is a wrapper around block-on(tokio-postgres).


I had an absolutely awful experience when I tried using SQLite through the sqlx crate. The work was mostly CPU-bound, but I didn't mind blocking the event loop for a few seconds here and there, so I thought I would just run it on Tokio's worker threads. Big mistake: I ended up getting extremely low CPU utilization due to a quirk (or, less charitably, a bug) in Tokio's scheduler.

I eventually rewrote the whole thing with rusqlite, but apart from being non-async, I found the API much less ergonomic to work with. But at least Rayon worked exactly as I expected.


Example from the recent past: I wanted to write some quick Rust code to stream an S3 object, uncompress it with zstd, and untar the content to a directory. aws-sdk-s3 supports only async (tokio), the tar crate only blocking, async-tar only async-std, async-compression only tokio. Hard to paper over it with `block_on` because of the streaming part. I don't remember what I actually did, but it doesn't matter - having to come up with a magic working combination is exactly the pain.

I just wish everything would at least still support blocking IO as a common denominator, so there's at least a baseline that is known to be possible. 99% of the time I do not need the benefits of async, because I write small software for relatively small, but still real, use-cases. And using the Rust ecosystem was easier just a few years ago than it is now for these use-cases, as blocking Rust had not yet been relegated to second-class citizen, replaced by an immature and fractured async.


> And using Rust ecosystem was easier just a few years ago than it is now for these use-cases

I used to write Rust web services (professionally) before async and it was definitely not easier for me. It was way harder. I'd end up with accidental hanging because some socket didn't have a timeout set on it properly, it was extremely leaky (why the hell am I talking to raw socket APIs just so that I can read from S3?), and it sucked compared to async. We've had radically different experiences, somehow.


> why the hell am I talking to raw socket APIs just so that I can read from S3

I don't know. https://doc.rust-lang.org/std/net/struct.TcpStream.html#meth... etc. were there for a while. Though I think that deadline-based timeouts would be way better to have. https://www.reddit.com/r/rust/comments/8b5krv/the_case_for_d... . Probably could be built around existing primitives. These things were never built/popularized because the community just jumped on async like some silver bullet.

I agree that building heavy duty networked services got easier with async, but at an expense of fracturing the ecosystem and dragging everything else into an MVP feature, which made things worse (at least in some respect) for other things. I personally don't do much web servers, and when I do I can just spawn lots of threads, terminate TLS with nginx anyway.

Again, I don't mind async on its own. I think it's great for what it is, and is useful when it's useful. But for decades tons of the web was built in Java, Python, RoR without async IO and it worked just fine. And I didn't have to play IO-type sudoku, and could expect that the most basic, native blocking IO is well supported, and not just an afterthought/wrapper.


> the kind of ... service that benefits from using non-blocking IO.

Async, in the sense from the article, and non-blocking aren't synonyms. Not using Async doesn't imply blocking.


What Rust libraries use non-blocking IO without using async syntax? I don't know of any. This interpretation of the request is new to me: I've exclusively heard complaints about libraries using async from people who want to use blocking IO.

(Also, in case it isn't obvious: I wrote the article in question.)


I usually do non-blocking ops with hardware interrupts, DMA, distributed computing (over CAN etc), and multiple cores. For GPOS/Desktop PCs, threads, SIMD (GPU or CPU etc) are effective.

More general than the specific concurrency operations I described is any sort of event loop or state machine, of which async is one example.


The entire unix operating system is designed so that you can write sequential code. It abstracts concurrency away. It’s incredible.


The point of async is to move concurrency from the OS into the process.

The specific issue is the context switch. With async, the compiler-generated state machine can be much more performant than the context switch that the OS provides.

Depending on the kind of application you're writing, this is either splitting hairs or very, very important. Applications that handle many (hundreds, thousands) of concurrent IO operations will see a noticeable performance improvement using async versus the OS's context switching.

But, there's a more important thing to consider: "async" in a programming language communicates that a method can block. It allows the caller to start the call and do something while it's waiting for the result, without needing to get into the weeds of threading. Purely relying on the OS for context switching means that it's hard to know what methods block.


Just a note that it seems many people think the main problem isn't the context switch. Rather, Linux by default allocates a very large stack to every thread, and thus many threads lead to high memory usage. Async makes this better.

See e.g. this recent clip from one of the engineers on Project Loom in which they argue that the context switch is relatively low overhead: https://youtu.be/07V08SB1l8c?si=i0v9w90Kb_M0I4gP&t=966


The stack memory won’t actually be physically allocated upfront - like all user space memory it is virtual.


At least on Windows I've reconfigured the stack space. (I had to run an experiment that required a lot of threads.)

Is this something that's hard to do on Linux?


It's the same, but this issue gets to one of the hearts here:

Both of these APIs are set by you, the user. You can choose how big to set your stack size, it's true. However, what value do you actually set? Too high, and you're still using too much memory, though admittedly less. Too low, and you either need to accept death by stack overflow, or runtime detect this case and fix things up.

With async/await in Rust, the compiler can statically see how large the stack size needs to be. Each "thread" will have a perfectly sized stack. No user intervention required, no fiddling with settings.


> Both of these APIs are set by you, the user. You can choose how big to set your stack size, it's true. However, what value do you actually set? Too high, and you're still using too much memory, though admittedly less. Too low, and you either need to accept death by stack overflow, or runtime detect this case and fix things up.

This is a solved problem - it's "just" the longest path in a call graph where each node is weighted by its function's stack frame size.

> With async/await in Rust, the compiler can statically see how large the stack size needs to be. Each "thread" will have a perfectly sized stack. No user intervention required, no fiddling with settings.

There is nothing stopping a compiler from doing the exact same analysis for a sync function at compile time (barring exceptions like varargs or deliberate recursion). They just... haven't, for some reason. It's a shame that it took until Zig for it to be addressed.


I agree with most of this.

> without needing to get into the weeds of threading

A thread is the exact concept needed to describe that and preserve “if else then” sequential code.

> 1000s of threads

Linux handles thousands of threads just fine. If you’re using a scripting language like python, context switching is the least of your performance concerns.


> A thread is the exact concept needed to describe that and preserve “if else then” sequential code.

I suggest looking at C# and Javascript that implement async very, very well.

The difference is that with conventional threading semantics, when you join a thread, it doesn't return a result. You still need to write some kind of "thing" to get your result from the subthread to whatever's waiting on it. (C# also provides a less-well-known BeginInvoke mechanism which is somewhat cleaner than join.)

In contrast, the promise (Javascript) or task (C#) has a result. Instead of joining a thread, the await keyword gets the result, just like calling a method.

> Linux handles thousands of threads just fine.

Yes... And no...

It doesn't matter what OS you're on, each thread needs its own allocated stack space and has the overhead of context switching. "async" optimizes that by putting data that would normally go into many different stack spaces into the heap and jumping around among concurrent operations without the context switch.

Again, depending on what you're doing, that's either splitting hairs, or really making a tangible improvement. But you can't argue that more allocated stacks, and more context switches, is faster than doing it in process. At that point you're arguing with fact.


> C#/JavaScript

I am familiar with how these are implemented. My opinion is the same. Yes you can implement slightly lighter weight threads in a language itself.

> It doesn't matter what OS you're on, each thread needs its own allocated stack space and has the overhead of context switching

This is a quantitative question. I know what it does, the question is how much slower it is than whatever async construct you want to use.


Too late to edit: I should also point out that Rust has the same issue with joining a thread: It doesn't give you the result of the function; unlike awaiting on a promise.

(I really struggled with async rust, so I'll admit that I don't remember the name of the type that represents the promise.)


I'm with you. I feel like an outcast in the Rust OSS embedded circles when I bring this up. They are heavily into async/embassy.


I'm curious why you don't like it for embedded - embassy has been an absolute delight for me thus far, and my primary complaint about it is just that it doesn't have the breadth of hardware support (yet?). It's been the thing that has redeemed Rust async for me, as otherwise, I tend to find the tension the author notes frustrating as well.


I am pro async Rust, but we don't use async in our embedded projects at Oxide. This is because of specific design constraints and goals: https://hubris.oxide.computer/reference/#_why_synchronous

That being said, I am also a fan of embassy when you have different design constraints and goals, and consider the fact that is is able to exist and be successful is a massive testament to the design of async Rust.

(We also use async Rust heavily further up the stack, and have some issues with it, but they tend to be disjoint from the way that this is talked about online.)


Oddly, I agree with you - but I think I may be approaching it differently. I use async as a mechanism to be able to have clearly-defined "tasks" in an embedded context, where tasks have straight-line code that handles something. I have most of the interaction between tasks be synchronous; in the case of embassy, the thing it brings is that it manages that otherwise-spaghetti-feeling mix of state machines in an easy-to-read kind of way.

Example from a current side project, since it's not work-encumbered: A wifi-enabled clock light for my 5yo. It has a task that sits there and every hour pings an SNTP server, updating a (mutex-protected) global with the time state. It has another task that listens for a telnet session for various control signals - which also updates a mutex-protected global config state. And it has a task that spins doing LED effects.

With embassy/async, I can write each of those as a separate task, without paying much attention to what gets invoked by interrupt handlers.

(This particular one is an rp2040, but I use it on stm-based systems as well).

I feel like this is kind of analogous to the discussion of threads-vs-events as a mechanism for structuring code vs. threads as a mechanism for achieving parallelism. :-)

Edited to add: Or, perhaps an alternative view of what I'm doing is that I'm using the embassy runtime as a really lightweight alternative to an RTOS, since I mostly haven't met an RTOS I don't want to throw across the room. An argument against what I'm saying here is, "well, use Hubris as your embedded OS and then you can have tasks and they can be synchronous" - which seems entirely fair.


> I use async as a mechanism to be able to have clearly-defined "tasks" in an embedded context

This is also a good design! The primary designer of Hubris also has a project that works like this: https://github.com/cbiffle/lilos


What I like about Embassy is the Metapac, and less reliance on generics and typestates than predecessors.

My complaint is I don't find the async ergonomics intuitive; it feels like a layer of misdirection. And, the viral character.


I genuinely don't understand the problems people have with async code... at least for anything involving http/api requests it's largely just a matter of decorating stuff with 'async/await'. It makes a few things difficult (iterators with futures, ugh), but mostly it's easy.


Because Rust is a systems language, it is common for people to use it for things that don't involve HTTP or even networking in general. Alternatively, people often use it for wasm. Before async, and specifically tokio, a much larger percentage of crates were usable by those people. It feels like something nice has been taken away, or that our use case has been marginalized.

The answer is to spend the time to create tokio- and/or async-free alternatives to these crates, but that was not expected to be necessary, so it is also added work. All of that is irritating, and the constant stream of "just accept it and get with the program, stop complaining!" is really quite infuriating.

The fair answer is that Rust should be what the majority of its users want/need it to be. But people who just need a backend language have a dozen or more good ones to choose from. People who need a systems language have far fewer choices, especially if you want memory safety, so having the aims of the language diverted from that end is irritating to say the least. I do suspect a fork will occur someday for those reasons, probably centered around use in Linux, Android, or Windows.


Well, you can block on it if you want

That said, I disagree on the usefulness of async: in my experience it does the job and it's my default setup.

There's a complex project where I ended up just using threads because I needed to squeeze performance out but overall async is great for tasks, sequence of async operations.

Still, it had some rough edges: (it's been a while, but) using async closures wasn't pleasant, and I re-architected my app to not use them as a result. There's an argument to be made that this change made my application easier to reason about, but overall it reflects poor language flexibility.


> However, if a library uses async, you have little choice but to make your whole project async. This adds to its horrible reputation.

No.

This is how you do it: you look at which executor your dependency is using (most likely tokio), you add it to your Cargo.toml (at zero cost, since it's already there in your dep tree), and then you wrap the async library calls in `block_on` and call it a day. You don't need to change a single other line in your project.


You breeze over the dependency weight by abusing "zero cost" to mean "sunk cost". They're not the same!


They are. If you're using a dependency you're using your dependency dependencies, there's no way around it.

If it matters to you, you don't have the same priorities as your dependency's author anyway and probably shouldn't be using it in the first place, and it has nothing to do with async.


This was not my experience. I had a working program that used the reqwest crate for a very basic synchronous-is-fine "just give me the contents of this web page" function. When I tried upgrading it to a newer reqwest that had decided async was the way to go, I found that that nice easy to use synchronous function had simply disappeared entirely, and I seemed to have to figure out async in order to use the library now. Luckily it was only a toy project so I just left it at the older version of the library. But as my first exposure to "async" as a rust feature it was a pretty off-putting one.


For reqwest, using the feature "blocking" will enable the reqwest::blocking API.


Reqwest still has a blocking client available [1]. I assume you upgraded across a major version, and are mad that there were breaking changes? Don't do that if you don't want to update your code to account for changes in the dependency.

[1]: https://docs.rs/reqwest/latest/reqwest/blocking/index.html


Another viable strategy is to start a single-threaded tokio executor and treat it like another thread that you communicate with over (flume) channels.


Yes, this works too. People complaining about async being a contaminant are often so prejudiced against it that they haven't even tried to understand the basics.


I have plenty of gripes about async in rust, but one of the things it's surprisingly good at is isolation of async runtimes. There's no reason you can't have multiple tokio runtimes, or transient runtimes, in your application.

Now if only the other warts were fixed, like the particularly poor compiler errors when there's an issue within an async function...


Yup. I was blown away when I spun up five different tokio runtimes in the same app all communicating through channels. I expected at least the tokio-aware channels to be problematic but not in the slightest. It's now humming in prod doing its business. Once you get the basics, it's quite easy to lego stuff out and get the best of both worlds.


Having written an async library (as in, the non-blocking stuff and state, that to the caller looks like async)... Understanding it isn't the easiest.


Yeah writing the underlying machinery is definitely much more complicated! Most users of async libraries don't need to worry about this though


I have doubts about async too. For better understanding I implemented my own executor and IO library (even with some tricky async destruction) and I am not quite happy with it. The problem is that in Rust there are no other good methods to write safe async code, are there? My impression is that the borrow checker forbids lots of patterns I know from other languages.


doesn't block_on work for you?


What on earth are you talking about? Async is amazing. Super fast servers in Actix with multiple DB connectors performing simultaneous queries is super easy to set up.

So many people complain about Rust being hard. What on earth are you people on about? This isn't at all a hard language.

I'm so sick of this "async is hard" / "Rust is hard" meme. No, it isn't.

Maybe it's inconvenient if you're used to slapping packages together and calling it a day. But I don't think the majority of us do work like that.

And it's not hard.


The point can be made equally well without the elitist condescension.


It's not elitist.

I'm sick of people perpetuating this about the Rust ecosystem. Not a month goes by without some article badmouthing the language and its features.

Stop it.

We want our language to gain support in enterprise so we can get paid to write it as our day job. These articles and negative comments are not helping.


This article is by a Rust contributor. I'm sure they want to see Rust be more widely adopted as well. Async is pretty great, but it's not perfect, and in some situations its shortcomings cannot be worked around easily.


I think the comment this person was responding to is completely off topic for my blog post, and extremely shallow. I share their frustration with how async Rust is talked about on Hacker News.


I also agree with echelon, and I've been working with async rust daily for two and a half years. It was a bit painful before tokio 1.0, but by and large it's consistently great to work with, especially now. My only complaints remain having to restructure closure-based code into loops and occasional weird lifetime issues caused by the `#[async_trait]` macro, but hopefully the latter will go away with stabilization of `async_trait`.

The amount of "async rust is impossible to use," "async rust is fundamentally broken," etc. commentary that comes up on HN is absurd. I have trouble squaring my own experience with it, and it makes me think people are either complaining just to complain or that they haven't worked with it in anything other than toy contexts. I have a hard time imagining how difficult it would be to maintain our ~100k line rust codebase (http proxy that processes requests & responses, makes DB queries, and makes http requests to other services) and keep any kind of predictable performance by spawning threads, using channels, etc.

Like, not trying to be elitist, but I feel like people are missing the benefits for the sake of piling on or something.


It sounds like you are saying "I like Rust and want to be able to write it professionally, so shut up and stop criticizing it, regardless of how valid your criticism is".

That's no way to do good work.


Choosing a technology because it’s what you want to work in is never proper engineering for non-college projects. Choose the best technology to…you know…create a quality product for your customer. You can get paid that way too.


Your comments boil down to "I think Rust is easy, so stop giving opinions that disagree".

You didn't make any arguments here at all. You can check my other comment for detailed reasons why async in a language is not a good approach to concurrency. The overview is that it just isn't a holistic solution, and the only time it will solve someone's problem is if they have extremely simple concurrency needs in the first place and those needs never scale or change.


The way I see their comment, and they'll have to forgive me if I'm wrong, is that the "async is bad" posts are generally not great.

a) They often focus on problems that, at least for me (and many others) are not significant. For example, acting like writing "+ Send + Sync + 'static" is causing your hands to seize in pain. Or they ignore that you can "block_on" a future.

b) They're then often interpreted by people who have very little context on async or rust at all. Look at the initial comment of "Async is a wart", it's idiotic. Look at how stupid the comments section is, how people are not talking about literally anything that boats wrote.

c) They make propositions that aren't very helpful. They mostly say "it was a mistake!" or "we don't need multithreading" or "we want TPC".

Contrast with Boats' post, which is far more contextual, far more historical, and proposes actual solutions and future work to be done.

It's quite frustrating to see the same sorts of complaints over and over again, especially when they're often not super great complaints to begin with.
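On (a), it's worth noting those bounds aren't unique to async: std::thread::spawn demands the same Send + 'static shape, for the same reason (the work may outlive the current stack frame and run on another thread):

```rust
use std::thread;

fn main() {
    let data = vec![1, 2, 3];
    // thread::spawn requires its closure to be Send + 'static, so we `move`
    // `data` in rather than borrow it -- the same shape of bound that async
    // spawn functions impose on the futures they're handed.
    let handle = thread::spawn(move || data.iter().sum::<i32>());
    let total = handle.join().unwrap();
    println!("{}", total); // 6
}
```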


I don't think that's what they are saying, I think that's what you're saying. I would be frustrated by that too, but I don't see any of that here.


Well they're responding to someone saying "Async is a wart".


You are absolutely right.

Integrating 'async' into a language is not a good approach to concurrency. If you want to run a single function on a different thread it can work. If you want something to run after that, that can work out.

Once you go beyond that, you are building a graph with very crude tools. Then you have lots of problems, including how you express dependencies when something you want to run depends on data coming from multiple other async sources.

Treating this as a language issue and not a library and tools issue is a huge mistake. Another reason is exactly what you outlined: libraries getting infected with a language's half-baked concurrency solution instead of doing what they are supposed to, while letting the user fit them in where they want.

What actually works is graphs that handle dependencies and data structures for synchronization, but ultimately those need to be done well too.


Is there any way async could be deprecated vs piling on more language/stdlib features?


In theory anything could happen, sure, but that would effectively kill the largest current use-case of the language in industry. Others are catching up, but it would be an absolutely disastrous decision.


> An async generator is a natural transformation from a generator: just like functions, generators can be marked async, and now you can use the await operator inside of them.

I don't see how that makes it a natural transformation.


For some reason the Rust project seems to be plagued by glacial development speed, with features taking years and years to be stabilized after design and often even after initial implementation.

Not sure why; progress used to be way faster years ago.

For example, this "four year plan" should be implemented in 6 months at most, not 4 years.

And the "long-term features" that "should be considered carefully, could not be addressed in the next few years" (lol, seriously?!?) should have serious work start right now and be released as part of the next edition in 2024 (since they need an edition break to be ergonomic). These are basic misdesigns of the type system that should have been fixed 10 years ago.

Whoever is managing and paying for the Rust developers needs to fix this.


> For some reason the Rust project seems to be plagued by glacial development speed, with features taking years and years to be stabilized after design and often even after initial implementation.

I wonder if it feels this way just because Rust has such a fast development cycle. But like, most languages take years and years to do something like async. Rust went extremely fast, relative to other languages.

> For example, this "four year plan" should be implemented in 6 months at most, not 4 years.

That seems kind of insane. What language goes from idea to shipping major features in 6 months? Who even wants that?


When you want to ship features that essentially will have to be maintained forever, and can't be meaningfully changed after shipping, you don't rush into things without a strong belief that you've gotten things as right as you can.

Consider the bit in OP about wanting to add a Move trait and deprecate Pin, and reverse the semantics so types are immovable by default. That's a difficult change to make now, after async has been stabilized. Obviously in this case the longer, deliberate process didn't save them from this (apparent) mistake, but overall new big language features should never be rushed.

> For example, this "four year plan" should be implemented in 6 months at most, not 4 years.

Now that just sounds reckless to me.


> Not sure why, progress used to be way faster years ago.

Yes, and the language was criticized as unstable for evolving so quickly. Move fast and people complain, move slow and people complain. They lose both ways.

I find the development cycle to be quite good: they ship features and improvements and refinement, and I don't need to rewrite my codebase every three months because something subtly broke.


Because "stabilized" means that the feature is effectively frozen and every subsequent change must be backwards compatible. Even edition changes are restricted by the need to ensure that the vast majority of code at least can be forward-ported by automated means. So you really don't want to stabilize a new feature unless you're super confident that every part of it has the absolute best design.



