First, async/await does NOT mean "threading" or "multiprocessing" or "concurrency". It simply means "using a state machine to alternate between tasks, which may or may not be concurrent." Right?
That makes sense to me.
But say I have written something in Rust that makes use of async/await, and there is absolutely no IO or multithreading. Say I have some awaitable function called "compute_pi_digits()" that can take arbitrarily long to complete but does no IO; it's purely computational. Is there any benefit to making this function awaitable? Unless I actually spawn it on a different thread, the awaitable version of this function will behave identically to the non-awaitable one, correct?
And one last idea: the async/await pattern is becoming so popular across vastly different languages because it allows us to abstract over concepts like concurrency, futures, promises, etc. It's a bit of a "one size fits all" regardless of whether you're spinning up a thread, polling for a network event, setting up a callback for a future, etc?
In both JS and Rust, you don't gain anything just by declaring your function to be async, or awaitable. Your function needs to be built around some kind of "primitive" that explicitly supports the "do something else in the meantime" mechanism. Using "await" on that thing lets your function piggyback on its support, but all your explicit, ordinary code blocks synchronously as usual.
In Rust, I think it's a fairly established pattern to turn blocking code, where the blocking part is not some IO action with explicit support for the futures mechanism, into an asynchronous, awaitable function by punting the work to a threadpool. That makes sense for CPU-bound work, for IO done by libraries that don't support futures, and for things like disk IO where the OS might not actually have decent support for doing it in a non-blocking fashion.
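To make the threadpool-punting idea concrete, here's a rough std-only sketch of what such a crate does under the hood. The names (`spawn_blocking`, `block_on`) are mine, and a real pool would reuse worker threads instead of spawning one per call:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Wake, Waker};
use std::thread;

// One-shot slot shared between the worker thread and the future.
struct Shared<T> {
    value: Option<T>,
    waker: Option<Waker>,
}

struct SpawnBlocking<T>(Arc<Mutex<Shared<T>>>);

impl<T> Future for SpawnBlocking<T> {
    type Output = T;
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<T> {
        let mut guard = self.0.lock().unwrap();
        if let Some(value) = guard.value.take() {
            Poll::Ready(value)
        } else {
            // Not finished yet: leave our waker so the worker can notify us.
            guard.waker = Some(cx.waker().clone());
            Poll::Pending
        }
    }
}

// Run a blocking closure on its own thread and hand back a Future.
fn spawn_blocking<T, F>(f: F) -> SpawnBlocking<T>
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    let shared = Arc::new(Mutex::new(Shared { value: None, waker: None }));
    let worker = shared.clone();
    thread::spawn(move || {
        let value = f();
        let mut guard = worker.lock().unwrap();
        guard.value = Some(value);
        if let Some(waker) = guard.waker.take() {
            waker.wake();
        }
    });
    SpawnBlocking(shared)
}

// Minimal executor so the example is self-contained.
struct ThreadWaker(thread::Thread);
impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(value) => return value,
            Poll::Pending => thread::park(),
        }
    }
}

fn main() {
    // Stand-in for compute_pi_digits(): purely CPU-bound work.
    let fut = spawn_blocking(|| (1u64..=1_000_000).sum::<u64>());
    println!("{}", block_on(fut));
}
```

The essential handshake (store the result, then wake the task) is what a pooled implementation wraps around real worker threads.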
I'm not sure if it's the canonical mechanism, but this crate seems to implement what I'm thinking of: https://docs.rs/futures-cpupool/0.1.8/futures_cpupool/
I'm not familiar with Rust but in JS you do gain something. Just the fact that the function is async means that it now explicitly returns a promise, which means that anything awaiting that promise will be in a new execution context and will definitely not run synchronously.
If you introduce suspension points in that (e.g. every 100 computed digits), then you can co-schedule other tasks (e.g. a similar `compute_phi_digits`) or handle graceful cancellation (e.g. if a deadline is exceeded, or if its parent task was aborted in the meantime).
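A toy illustration of that co-scheduling, under the assumption of a deliberately naive round-robin executor (the names here are made up, and real executors are wake-driven rather than busy-polling):

```rust
use std::cell::RefCell;
use std::future::Future;
use std::pin::Pin;
use std::rc::Rc;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// A future that suspends exactly once, then completes: an explicit yield point.
struct YieldNow(bool);

impl Future for YieldNow {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.0 {
            Poll::Ready(())
        } else {
            self.0 = true;
            cx.waker().wake_by_ref(); // we're ready to be polled again
            Poll::Pending
        }
    }
}

fn yield_now() -> YieldNow {
    YieldNow(false)
}

// Hypothetical chunked computation: after each chunk ("every 100 digits"),
// hand control back to the scheduler.
async fn compute(label: &str, chunks: u32, log: &Rc<RefCell<Vec<String>>>) -> u32 {
    let mut acc = 0;
    for i in 0..chunks {
        acc += i; // stand-in for real work
        log.borrow_mut().push(format!("{label} chunk {i}"));
        yield_now().await;
    }
    acc
}

// Naive round-robin scheduler: poll every unfinished task in turn.
struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

fn run_all(mut tasks: Vec<Pin<Box<dyn Future<Output = u32>>>>) -> Vec<u32> {
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    let mut results: Vec<Option<u32>> = vec![None; tasks.len()];
    while results.iter().any(|r| r.is_none()) {
        for (i, task) in tasks.iter_mut().enumerate() {
            if results[i].is_none() {
                if let Poll::Ready(v) = task.as_mut().poll(&mut cx) {
                    results[i] = Some(v);
                }
            }
        }
    }
    results.into_iter().flatten().collect()
}

fn main() {
    let log = Rc::new(RefCell::new(Vec::new()));
    let (a, b) = (log.clone(), log.clone());
    let tasks: Vec<Pin<Box<dyn Future<Output = u32>>>> = vec![
        Box::pin(async move { compute("pi", 3, &a).await }),
        Box::pin(async move { compute("phi", 3, &b).await }),
    ];
    let results = run_all(tasks);
    println!("{results:?}");
    // The log shows the two computations interleaving at the yield points:
    // pi chunk 0, phi chunk 0, pi chunk 1, phi chunk 1, ...
    println!("{:?}", log.borrow());
}
```

Cancellation falls out for free: the scheduler can simply drop a task at any yield point instead of polling it again.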
Well, then we can simply optimize away your entire program as it does nothing and running it has no side effects.
Even if your program is entirely CPU bound, there are uses for writing in an async/await style. As an example, parsers can be quite natural to write in that style.
Or you can use it to have multiple computations running at the same time, and give updates on their progress. It is voluntary time slicing, which is significantly less overhead than the OS doing time slicing for you.
Software threads, from the OS to your application, run on one of the CPU's cores for a certain amount of scheduled time, then get switched out for some other thread. If threads are waiting on IO, they're not making much use of the time they get and are wasting CPU capacity.
An older approach is to just make more threads and switch them out faster, but this is very inefficient. Async/await is a way to keep a thread from being stalled by a single function: it can switch to a different function in the same process that does have work available. It's basically another level of granularity in slicing CPU time, within a thread.
The keywords do not force anything, they are just signals to the underlying software that it may pause and come back later if necessary, along with setting up state to track results. Some methods may still run all on the same thread if there's nothing else to do, or if the async result is already available and there is no waiting needed.
Most async/await implementations are built on top of yield, generators, promises, or other constructs that are essentially state machines or iterators.
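For instance, here is roughly the state machine a compiler might generate for a one-await Rust function (a hand-desugared sketch; the real generated type is anonymous and more involved):

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// Hand-desugared version of something like:
//     async fn add_one(f: impl Future<Output = u32>) -> u32 { f.await + 1 }
// Each await point becomes a state of the machine.
enum AddOne<F> {
    Awaiting(F), // suspended at `f.await`
    Done,        // finished; polling again is a bug
}

impl<F: Future<Output = u32> + Unpin> Future for AddOne<F> {
    type Output = u32;
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        match &mut *self {
            AddOne::Awaiting(inner) => match Pin::new(inner).poll(cx) {
                Poll::Ready(v) => {
                    *self = AddOne::Done;
                    Poll::Ready(v + 1)
                }
                Poll::Pending => Poll::Pending,
            },
            AddOne::Done => panic!("polled after completion"),
        }
    }
}

// A waker that does nothing; sufficient for an already-ready inner future.
struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

fn main() {
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    let mut fut = AddOne::Awaiting(std::future::ready(41));
    assert_eq!(Pin::new(&mut fut).poll(&mut cx), Poll::Ready(42));
    println!("ok");
}
```

The `.await` keyword is what lets the compiler build this enum for you instead of you writing it by hand.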
The async call itself doesn't return intermediate results, though, so you'd have to handle that a different way. And if you want to cancel the task, you need another way to handle that too.
Something like computing the digits of pi would be better represented by a stream or iterator since the caller should decide when it's done.
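For example, a plain Iterator puts the caller in charge of how many digits to take. This sketch uses Gibbons' unbounded spigot algorithm; note that the i128 state overflows after roughly the first eight digits, so a real version would need a big-integer type:

```rust
// Digits of pi as a plain Iterator (Gibbons' unbounded spigot).
// The caller decides when to stop consuming.
struct PiDigits {
    q: i128,
    r: i128,
    t: i128,
    k: i128,
    n: i128,
    l: i128,
}

impl PiDigits {
    fn new() -> Self {
        PiDigits { q: 1, r: 0, t: 1, k: 1, n: 3, l: 3 }
    }
}

impl Iterator for PiDigits {
    type Item = u8;
    fn next(&mut self) -> Option<u8> {
        loop {
            if 4 * self.q + self.r - self.t < self.n * self.t {
                // The next digit is now certain: emit it and rescale the state.
                let (q, r, t, n) = (self.q, self.r, self.t, self.n);
                self.q = 10 * q;
                self.r = 10 * (r - n * t);
                self.n = 10 * (3 * q + r) / t - 10 * n;
                return Some(n as u8);
            }
            // Otherwise absorb another term of the series into the state.
            let (q, r, t, k, l) = (self.q, self.r, self.t, self.k, self.l);
            self.q = q * k;
            self.r = (2 * q + r) * l;
            self.t = t * l;
            self.k = k + 1;
            self.n = (q * (7 * k + 2) + r * l) / (t * l);
            self.l = l + 2;
        }
    }
}

fn main() {
    let digits: Vec<u8> = PiDigits::new().take(6).collect();
    println!("{digits:?}"); // [3, 1, 4, 1, 5, 9]
}
```

An async Stream version would look the same from the caller's side, just with `.next().await` instead of `.next()`.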
See http://blog.ploeh.dk/2016/04/11/async-as-surrogate-io/ for further discussion. Regular programmers don't consciously use Task&lt;T&gt; as an IO-monadic marker, but they are surprised when a usage departs from that model.
But ultimately, to make use of async, you need async primitives - something that lets you say "do this in the background somehow, and let me know once you're done". Any async/await call should ultimately end at one of those primitives, and it's at that point that another call might get interleaved. If you don't actually do I/O or anything else that can do a non-blocking wait, you're not getting anything useful from async.
Async/await is literally all about explicit continuations. It's not about concurrency or parallelism per se, although it can be used in that context.
C# async/await is also very much resumable state machines
However, the execution aspect is a bit different: in C#, once a leaf future/Task gets resolved, it will in many cases synchronously call back into the state machine which awaited the Task (by storing a continuation inside it). A whole promise chain might resolve synchronously, directly on the stack of the caller. And it's only "in many cases" because the whole thing depends on some very subtle properties, like whether a SynchronizationContext or TaskScheduler was configured.
In Rust's task system, a leaf future will never call back into the parent. It will only notify the associated task executor that it can retry running/polling the Future to completion. When the task gets executed again, it will run from the normal scheduler thread in a top-down fashion.
This makes Rust's system a little less performant for some use cases, but also a lot less error-prone (no synchronization issues from not knowing where some code runs). It is also one of the key ingredients for avoiding allocations on individual futures.
They will use language-level generators if compiling to ES 2015, or user-land generators if compiling below that.
Is the website author here? What are you running server side that’s giving such great performance?
Pinning is required here because your AsyncRead read_to_end returns a future bound by some reference lifetime?
Yep. The generator created by quote_encrypt_unquote holds internal self-references from the future created by read_to_end into the AsyncRead it's storing in its environment. While this is happening, the AsyncRead must not move, and therefore the generator must not move, which is what pinning represents.
But the rest of the event-loop machinery is quite different. JS's async is still fundamentally callback-based. Rust's futures are polled. In JS there's a single global event loop and promises run automagically. In Rust you create futures managed by their executors, each handling its own kind of tasks (CPU pools, network polling).
Would you mind elaborating on the polled aspect of Rust futures, or linking me to some documentation? Do you mean that there is a loop polling the result of a future? How does that work with things like select?
We have some really in depth docs in the works here but it’s not quite ready yet.
It could cause code bloat, but in practice these are small functions that get inlined and optimized out to almost nothing (sometimes even the whole struct disappears).
Without this, you sometimes had to write a wrapper function that does some synchronous setup and returns a Future, which was a bit annoying for stylistic reasons.
There's an interesting but somewhat old discussion here:
I wonder if anything changed since then? I'm not a Rust programmer so I didn't really understand the article.
I found that it also works better together with some other features, like select! and the current cancellation mechanics. But I can't remember all the details right away, and it might be pretty hard to explain.
So the performance concern simply does not apply: Rust suspends the same number of times as Dart 2. The race condition concern might, depending on how you look at it, but in return the execution model from within a Future is much more straightforward and predictable.
I don’t 100% remember, but I think some of the details for us still ended up significantly different. There are so many ways to implement this stuff...
The pattern-based implementation of `await` is, in my view, the coolest part of the async/await feature set.
Aside: this can be a source of errors in the F# code I've seen, as folks will mix up the cases where they want the result vs. running a computation many times. There is certainly plenty of expressivity, but in practice it's a muddier representation (it took me over a year to appreciate this).
"... performing a CPS-like transform where an async function is split into a series of continuations that are chained together via a Future::then method"
they are referring to c# and js implementation of promises/futures here
State machines are also what Clojure(Script) core.async uses.
(Easy choice as there are no continuations available)
Go sort of does the same thing, but insists on running fibers in separate threads at its convenience; which means giving up the lovely simplicity of cooperative multitasking for the same old multi-threaded circus.
I'm unfortunately not aware of any languages more recent than Smalltalk that get this right. My own baby, Snigl, is just getting to the point where it's doable.
And I don't think FFI direction matters much. The moment you have callbacks, your stack has interleaving of languages anyway (i.e. X called into Y which called back into X). Does it really matter which language the innermost and the outermost stack frames belong to? You still need to handle the mix in the middle.
This sounds like a native compiler perspective to me; with pure VM fibers like Snigl's these are not issues.
It's not about direction, it's about controlling the world from the outside.
You sound more like you're on a mission to prove to the world it's impossible, since Rust didn't manage to get it right.
The way goroutines work isn't compatible with Rust's core language goal of zero-cost abstractions. In order for a goroutine to suspend and resume at arbitrary points in otherwise normal functions, each goroutine needs its own stack. This is convenient but comes at a cost, and I certainly wouldn't say that it matches reality or is close to the system.
The way Rust implements async ensures that there is zero overhead. You can think of the compiler as constructing an elaborate state machine: each task is a normal structure in memory, just a few bytes rather than an 8 KB stack. This means that, by paying the cost of dealing with a somewhat invasive language feature, Rust async code is more efficient than the equivalent Go code.
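You can see the size claim directly with a small std-only probe (the exact byte count is compiler-dependent, so this only bounds it):

```rust
use std::future::Future;

// An async block compiles to an anonymous state-machine type. Its size is
// determined by the variables that are live across await points, not by a
// pre-allocated 8 KB goroutine-style stack.
fn make_future() -> impl Future<Output = usize> {
    async {
        let buf = [0u8; 32]; // lives across the await, so it's stored in the state machine
        std::future::ready(()).await;
        buf.len()
    }
}

fn main() {
    let fut = make_future();
    let size = std::mem::size_of_val(&fut);
    println!("future size: {size} bytes");
    // On the order of the live data (32 bytes here) plus a discriminant,
    // far below 8 KB.
    assert!(size >= 32 && size < 8192);
}
```

Spawning a million of these costs megabytes, not gigabytes, which is the efficiency argument in a nutshell.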
This is part of Rust's core design. The language should be as convenient or productive as possible, but never at the cost of performance or efficiency even if that cost is very small.
Your guesses are incorrect.
How many LOC is Tokio nowadays?