
I wonder about an alternate timeline where Rust kept its lightweight threads. Marking async calls explicitly instead of marking synchronous calls with await is a step in that direction, because that's also the syntax you have with lightweight threads. What would problem 1 look like in that alternate timeline?

In that case the concept of async functions disappears and your first function becomes a normal function. The second function remains a future-building function. So I'm tempted to conclude that this problem might be a non-problem, caused by a confusion between async functions and future-building functions. Even though an async function desugars to a future-building function, they are conceptually distinct in the lightweight threads model. With lightweight threads, all functions are async functions. A future-building function explicitly builds a delayed computation. The types should be different.

An async function is just like a normal function, except that it may call async APIs (i.e. other async functions). Calling an async function from a normal function is an error; it does not return a future. The programmer never sees that async functions are implemented with Futures under the hood. In particular, an async function is not syntactic sugar for wrapping its body in an async block. We rename the async { ... } keyword to future { ... }, which constructs a future out of the block. You may call async functions in a future block. So if you want to call an async function inside a normal function, you must do future { foo() }, making it syntactically clear that the call is delayed even when the call is made from inside a normal function. The programmer no longer needs to think about how async functions work at all. Don't tell them that future{ foo() } will actually just call foo() and that foo() returns the future; they don't need to know that. The only thing they need to remember is that async functions can only be called from within async functions or future blocks. In all other respects they behave the same as normal functions. All delaying of computation and running computation in parallel is explicit.

IMO, problem 1 only occurs to programmers who have been told that async fn = Future-returning function. That's a leaky abstraction; it's syntactic sugar. If you prevent them from developing this notion, the problem simply doesn't occur. To understand the main proposal for async/await you basically have to understand what desugaring the compiler is doing. With "explicit future construction, implicit await" you can use async functions and futures without understanding how they work under the hood. It's a non-leaky abstraction.

IMO, problem 2 is a problem for the IDE. The IDE can easily show which function calls are async and which are not.



I tried to lean pretty hard into "this syntax is just like threads" when I wrote that internals.r-l.org post, proposing almost exactly what you describe here. Unfortunately, problem #1 is not a result of confusion or unnecessary conflation, but a fundamental question of lifetimes: the exact same problem already exists with normal OS threads, just as it would with lightweight threads.

That is, a function is always allowed to hold onto its arguments until it returns. If its execution is deferred (e.g. `|| the_function(and, its, arguments)`) for whatever reason (e.g. spawning a lightweight thread or async task), the borrow checker has to consider that those arguments may stick around indefinitely.

Of course, it is 100% doable to force people to work around this just by giving future-building functions a different type. But as I described, this means callers have to add or remove an extra `.run()`/`.await()`/etc. if the API ever switches between the two. This is accepted in the world of threads, but not in the world of futures, because we already have a solution which is "just switch to a future-building function, everyone's already awaiting it."

(Personally, while I certainly see it as a real problem, I would rather we just live with it. It's not hard to work around, and we already do it in the world of threads when necessary, which is rarely.)


I still don't understand why #1 is a problem.

> But as I described, this means callers have to add or remove an extra `.run()`/`.await()`/etc. if the API ever switches between the two.

Switches between what, though? When you want to do something asynchronously, you indeed build a future and later .await() it. Suppose you then want to build that future in a different way, for example by transforming future { foo(x) } into making foo(x) itself return the future (i.e. moving the future{} block inside foo), possibly because you want to dereference x before building it. Well, the .await() was already there, and doesn't need to be changed. The future{} ... .await() pair gets introduced when you want to make things asynchronous, which is exactly as it should be?

Furthermore, isn't that the same with the main async/await proposal? It is indeed true that when you make things async you only have to mark a function as async, and then all the calls to it automatically become async. However, at the end of the day you still need to await those futures or else they won't do anything. So when you switch from sync to async you still need to add those awaits.

The difference seems to me the other way around: with the main proposal you need more awaits (namely, at all points where you want to stay synchronous). With your proposal you need more async/future blocks (namely, at all points where you want to switch to asynchronous).

I think that using the same keyword for async fn and async{} block is a source of confusion, because it makes it seem like async fn is basically like wrapping the body in an async{} block. It's what makes people think that an async fn is like an automatically awaited future, which is a confusing way to think about it and makes it hard to see why this proposal is a good idea (even if it's actually implemented like that under the hood). I think it becomes a lot clearer if you use a different word for these two concepts (like async fn and future{} block), and remove the ::new() and only use future{} syntax.

This proposal does raise another question: why not just green threads, and remove the concept of async functions entirely?


> This proposal does raise another question: why not just green threads, and remove the concept of async functions entirely?

Making another reply because this is completely unrelated...

Rust already tried that. The problem is that Rust has a hard requirement as a systems language to support, at least, native I/O APIs, and the green threads implementation added a pervasive cost to that support because all standard library I/O had to go through the same machinery just in case it was happening in a green thread.

That overhead made green threads themselves basically no faster than native threads, so they were dropped before 1.0 to make room for a new solution to come along eventually. Futures and async/await are that solution, and it turns out to be much lighter weight than green threads ever could have been anyway: no allocating stacks, no switching stacks, no interfering with normal I/O.

The syntax could have been different, but the implementation is far better this way.


Couldn't green threads in principle be implemented the same way as your async proposal? The compiler could infer which functions need to be marked async. To support separate compilation it might need to compile two versions of each function, an async one and a normal one. You'd have exactly what you have in your proposal, except you never have to write async fn. You could still have blocking & non-blocking IO. It wouldn't totally unify green threads with OS threads, but Futures/async/await don't do that either.


Yes, though you probably wouldn't call them green threads anymore at that point. (I mean, Rust async/await is implemented that way modulo syntax and it's not called green threads. But that's beside the point.)

In fact Rust has already thrown out separate compilation with its monomorphization-based generics, so making functions "async polymorphic" in the same way wouldn't be anything new.

And while that's somewhat unlikely from what I can tell, Rust is getting a little bit of that "effect polymorphism" somewhere else: generic `const fn`s can become runtime functions when their type arguments are non-const. So maybe someday we'll be able to re-use generic functions in both sync and async contexts depending on their type arguments.


Switches between keeping the args for the function's full duration, or returning a closure (async or not) that doesn't hold onto them.

Here's the problem in terms of normal OS threads:

    fn f<'a>(r: &'a i32) -> i32 { ... *r ... }

    // oh no, I can't do this:
    let i = 42;
    thread::spawn(|| f(&i));
Here's the workaround:

    fn f<'a>(r: &'a i32) -> impl FnOnce() -> i32 {
        let i = *r;
        || ... i ...
    }

    // now I can do this:
    let i = 42;
    thread::spawn(f(&i));
In this case, and the analogous lightweight threads case you're describing, and the "implicit await" post I originally linked, the workaround forces the caller to change its syntax. From `|| f(&i)` to `f(&i)`, or from `async { f(&i) }` to `f(&i)`, or from `future { f(&i) }` to `f(&i)`.

But in async/await as currently proposed and implemented, the transformation goes from this...

    async fn f<'a>(r: &'a i32) -> i32 { ... *r ... }

    // oh no, I can't do this:
    let i = 42;
    task::spawn(f(&i));
...to this:

    fn f<'a>(r: &'a i32) -> impl Future<Output = i32> {
        let i = *r;
        async { ... i ... }
    }

    // now I can do this:
    let i = 42;
    task::spawn(f(&i));
You can imagine someone originally writing the first version, when all their callers just immediately `await` so it's okay if the reference sticks around. But then another caller wants to write something like the above, so they make the transformation above.

Under today's futures, all the other call sites keep working (`f(&i).await`) and the new use case starts working. Under our proposals, that transformation would break everyone just using the `f(&i)` syntax, so it probably wouldn't happen, and instead the new caller would have to write this:

    thread::spawn(async move {
        // move `i` in here, or worse, stuff it in an Arc, even though it's only needed for setup!
        let my_i = i;
        f(&my_i)
    });


I see. Would that be such a disaster under your proposal? Original code is:

   async fn f<'a>(r: &'a i32) -> i32 { ... *r ... }
Some callers do future{ f(&i) }.await().

Now the new caller comes in, so we add a function f_future:

   fn f_future<'a>(r: &'a i32) -> impl Future<Output = i32> {
        let i = *r;
        async { ... i ... }
   }
The new caller uses f_future and the old callers keep using f. To prevent duplication we can factor out ... i ... into a function g(i) and do g(*r) in the async fn f. The other callers can migrate from future{ f(&i) }.await() to f_future(&i).await() over time.

It's not as ideal as not having to change the signature at all, but signature changes can be dealt with. Or is this a big problem with OS threads?


I agree, there's plenty of ways to work around it, and I'd prefer any of them to the syntactic mess we're in now. I'm not the one making the decisions, though. :)



