
Asynchronous Programming in Rust book - guifortaine
https://rust-lang.github.io/async-book/
======
est31
There isn't _too_ much activity on this book [1] but I definitely think that
more documentation about async programming in Rust is needed. Just recently I
wanted to do something in async Rust and it's just such a PITA. I've been
writing Rust for 3-4 years now, and async throws me back to those first days
when I didn't know how to cope with the error messages. Hopefully async/await
syntax will improve this experience, but even then I think that documentation
is needed. The futures crate is severely underdocumented. I'd love to have an
example snippet next to each combinator, etc.

[1]: [https://github.com/rust-lang/async-book/commits/master](https://github.com/rust-lang/async-book/commits/master)

~~~
etxm
The documentation is asynchronous.

I’ll show myself out.

~~~
agumonkey
await oriented pedagogy

------
shadowmint
Is it just me, or is it somewhat ironic that the most important part:

[https://rust-lang.github.io/async-book/getting_started/state...](https://rust-lang.github.io/async-book/getting_started/state_of_async_rust.html)

is missing?

Feels very much like the state of async matches the state of the guide. :P

What _is_ the state of async? Is it close? Is it still changing with the
futures 0.3-beta not finalized?

Are we six months away? A year?

~~~
portmanteaufu
You can track the progress of the remaining issues here:

[https://areweasyncyet.rs/](https://areweasyncyet.rs/)

~~~
shadowmint
I know, but I still struggle to get a handle on the state of it really.

What _is_ going on with futures 0.3? Why is everyone still using 0.1?

How does that relate to these issues?

It superficially appears like the whole async story is still in a concept
stage...

~~~
Animats
It's the open source approach to upgrading. All the cool kids are focused on
version N+1, which doesn't work yet. The users still on version N don't get
support any more because only losers use version N. You see this pattern
frequently in open source. The Python 3 debacle spent five years in that
state.

Commercial products tend to avoid this. Sales of version N go way down before
version N+1 is generating revenue. Overall revenue drops during the
transition. That's not good.

~~~
steveklabnik
Note that this isn't what's happening here; tokio is explicitly supporting
"version N" in your terminology. Which is why your parent is asking why people
still seem to be using the "old" version.

(Also, there's a compatibility layer, so even the people who want to play
with the shiny new N + 1 can do so, even though it's not directly supported.)

------
Tarean
This approach to async programming feels like a much more leaky abstraction
than the 'it's basically semaphores' stuff for m:n threads. Though being able
to do so much as a library is nice.

How does async translate calls to other async functions? Is refactoring into
smaller async functions less efficient? If not, how does it deal with
(possibly indirect) recursive function calls? Does it give up or select a loop
breaker?

And what is the purpose of the pingpong between executor->Waker->push onto
executor?

I am also still unsure what the approach to multithreading might be. Multiple
executors with work stealing or one dispatch executor with worker threads or
something else still?

~~~
steveklabnik
> How does async translate calls to other async functions?

There's nothing special going on. Remember, async on a function desugars from
something like

    async fn function(argument: &str) -> usize {
        // ...
    }

to

    fn function(argument: &str) -> impl Future<Output = usize> {
        // ...
    }

so, when you call an async function, you get a Future back. That's true even
if it's inside of another async function.
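To make that concrete, here's a minimal std-only sketch (the `length`/`outer` names and the hand-rolled no-op waker are mine, not from the thread): calling the async fn just builds an inert future, and nothing runs until it's polled.

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing: enough to poll futures that never block.
fn noop_waker() -> Waker {
    fn raw() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker { raw() }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe { Waker::from_raw(raw()) }
}

async fn length(s: &str) -> usize {
    s.len()
}

async fn outer() -> usize {
    let fut = length("hello"); // no work has happened yet: `fut` is inert
    fut.await                  // .await is what actually drives it
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(outer()); // calling `outer` also just built a future
    // Nothing here ever returns Pending, so one poll completes it.
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Ready(5));
}
```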

> If not, how does it deal with (possibly indirect) recursive function calls?

Recursive calls to async functions will fail to compile:
[https://github.com/rust-lang/rust/issues/53690](https://github.com/rust-lang/rust/issues/53690)

That said, see that discussion; the trait object form will probably eventually
work.

Heavy recursion isn't generally Rust's style: since we don't have guaranteed
TCO, deep recursion risks overflowing the stack and panicking.
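For illustration, the boxed workaround looks roughly like this (a sketch, not necessarily the final form the issue thread settles on; the `fib` name and the no-op waker are mine):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A direct recursive `async fn fib` is rejected: its future type would
// have to contain itself. Boxing the recursive call breaks the cycle
// by putting the inner future behind a fixed-size pointer.
fn fib(n: u64) -> Pin<Box<dyn Future<Output = u64>>> {
    Box::pin(async move {
        match n {
            0 | 1 => n,
            _ => fib(n - 1).await + fib(n - 2).await,
        }
    })
}

// Minimal no-op waker so we can poll without a real executor.
fn noop_waker() -> Waker {
    fn raw() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker { raw() }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe { Waker::from_raw(raw()) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = fib(10);
    // Pure computation never returns Pending: one poll finishes it.
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Ready(55));
}
```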

> Is refactoring into smaller async functions less efficient?

That's a complicated question. It really depends. I don't think it should be,
thanks to inlining, but am not 100% sure.

> And what is the purpose of the pingpong between executor->Waker->push onto
> executor?

Right now, the best resources are
[https://boats.gitlab.io/blog/post/wakers-i/](https://boats.gitlab.io/blog/post/wakers-i/)
and [https://boats.gitlab.io/blog/post/wakers-ii/](https://boats.gitlab.io/blog/post/wakers-ii/)
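As a toy illustration of that round trip, here's a minimal single-task executor (the `Signal` and `YieldOnce` names are my own invention, std-only): the executor polls, the future stashes its Waker and returns Pending, and the Waker later tells the executor to poll the task again.

```rust
use std::future::Future;
use std::pin::{pin, Pin};
use std::sync::{Arc, Condvar, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// The "push back onto the executor" channel: waking flips a flag and
// unparks the executor thread.
struct Signal {
    woken: Mutex<bool>,
    cond: Condvar,
}

impl Wake for Signal {
    fn wake(self: Arc<Self>) {
        *self.woken.lock().unwrap() = true;
        self.cond.notify_one();
    }
}

// One-task executor: poll; on Pending, sleep until the Waker fires.
fn block_on<F: Future>(fut: F) -> F::Output {
    let signal = Arc::new(Signal { woken: Mutex::new(false), cond: Condvar::new() });
    let waker = Waker::from(signal.clone());
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(v) => return v,
            Poll::Pending => {
                let mut woken = signal.woken.lock().unwrap();
                while !*woken {
                    woken = signal.cond.wait(woken).unwrap();
                }
                *woken = false; // wake consumed; poll again
            }
        }
    }
}

// Returns Pending once, immediately waking itself, then completes:
// the executor -> Waker -> executor ping-pong in miniature.
struct YieldOnce {
    polled: bool,
}

impl Future for YieldOnce {
    type Output = &'static str;
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.polled {
            Poll::Ready("done")
        } else {
            self.polled = true;
            cx.waker().wake_by_ref(); // ask to be re-polled
            Poll::Pending
        }
    }
}

fn main() {
    assert_eq!(block_on(YieldOnce { polled: false }), "done");
}
```

Real executors queue many tasks instead of parking on one, but the contract is the same: a future that returns Pending is only re-polled after its Waker is invoked.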

> I am also still unsure what the approach to multithreading might be.

You have options! Tokio now does multiple executors with work-stealing by
default, in my understanding.

~~~
Tarean
Thanks for your answers!

From the second blog post I actually found
[https://github.com/tokio-rs/tokio/pull/660](https://github.com/tokio-rs/tokio/pull/660),
which switched tokio from one reactor plus worker threads to n reactors with
work stealing.

------
Dowwie
This book is still largely a work in progress -- note all of the TODOs. The
async-await design has to stabilize before the rest can be finished.

------
cjohansson
Sounds like an interesting book; the Rust Programming Language book was a
great read ([https://doc.rust-lang.org/stable/book/](https://doc.rust-lang.org/stable/book/))

------
arve0
How much performance is gained by going async instead of blocking threads on
modern hardware?

Skimmed through [https://vorner.github.io/async-bench.html](https://vorner.github.io/async-bench.html).
If I understand it correctly, one gets about twice the performance with async.

Is this correct? Seems like a trade-off (code complexity vs performance) not
worth making.

~~~
bluejekyll
async isn't only about performance; it has other advantages, like reduced
resource consumption. In addition, async IO gives you better control over how
to cancel reads and writes on systems where the IO is not interruptible.

But you are correct: if you don't have a specific need, async is generally
harder than using threads for concurrency. Ideally the async/await work in
Rust will make that trade-off less extreme than it is today, which may mean
more people will feel comfortable using it, as it should reduce boilerplate.

~~~
hobofan
> but has other advantages, like reduced resource consumption

Could you expand on that? I've never heard that mentioned about async before.

~~~
steveklabnik
You can think of a task as being a thread, but one with a single allocation
that's exactly the stack size it needs. No more, no less. This uses less
memory than spinning up a thread with the default stack size. Yes, you could
use the proper APIs to set the correct size too, but you'd have to figure
that size out by hand for each thread. With tasks it just happens implicitly.
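For comparison, here's a sketch of doing it by hand with OS threads (the 64 KiB figure is an arbitrary guess of the kind a programmer would have to make; with tasks the compiler computes the state machine's size for you):

```rust
use std::thread;

fn main() {
    // Each OS thread reserves a fixed stack up front (2 MiB by default
    // on many platforms). Builder lets you shrink it, but *you* have to
    // pick a safe size for each thread's workload.
    let handle = thread::Builder::new()
        .stack_size(64 * 1024) // hand-picked: 64 KiB
        .spawn(|| (1u64..=100).sum::<u64>())
        .expect("spawn failed");
    assert_eq!(handle.join().unwrap(), 5050);
}
```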

~~~
arve0
Default stack size for new threads is 2 MB
([https://doc.rust-lang.org/std/thread/#stack-size](https://doc.rust-lang.org/std/thread/#stack-size)).
That means total_memory_usage = 2 MB * number_of_cores, which is not so much
on modern hardware.

~~~
arve0
...and of course you know that, being on the Rust core team, just adding the
specifics (I did not know the numbers myself).

~~~
steveklabnik
Hm, maybe I misunderstand what you're getting at; you're talking about one
thread per core, not one thread per unit of work? Sure, if you only have that
few threads, then it's not that big of a difference, but if you want to spin
up a few hundred thousand of them...

~~~
arve0
> you're talking about one thread per core, not one thread per unit of work?

Yes, a thread pool, consisting of one thread per core/computing unit. The
units of work are then scheduled between the threads. Units of work here being
some kind of IO, e.g. servicing HTTP requests.

> but if you want to spin up a few hundred thousand of them...

Hm. I thought there was a limit on the work that can be done concurrently by
the CPU, based on the number of cores/hyper-threads available. I found this on
threads and IO performance [1]; it seems to make the same point.

What kind of workload is commonly spread over so many threads (on the same
machine)? Does the OS switch efficiently between hundreds of threads on
regular CPUs? Genuinely interested.

1: [https://www.jstorimer.com/blogs/workingwithcode/7970125-how-...](https://www.jstorimer.com/blogs/workingwithcode/7970125-how-many-threads-is-too-many)

~~~
steveklabnik
Okay so, there are lots of ways to do this kind of stuff. A threadpool is a
pretty classic one. Apache being the poster child here in an HTTP server
context.

> Thought there was a limit for work that can be done concurrently by the CPU,

Right. But in an IO bound scenario, the CPU isn't doing work; it's waiting on
IO. So, because threads are generally heavy, you don't want a ton of them,
taking up memory, doing nothing.

But, when you have lightweight threads, you can spin up one per connection.
This ends up being simpler, and you don't have the large memory usage. This is
what nginx does, in a sense. It still has a worker per core, but each of those
workers can handle thousands of requests simultaneously, because it's all
non-blocking.

That limit to concurrent work is exactly why non-blocking architectures are so
important, and task systems fit into them really nicely.

~~~
bluejekyll
Excellently said, Steve. This is a great thing to know in this context,
“Latency numbers every programmer should know”:
[https://gist.github.com/jboner/2841832](https://gist.github.com/jboner/2841832)

------
kingosticks
I think I'm correct in saying you don't _need_ tokio for async, but it seems
all non-toy code uses it. Are there any alternatives to tokio out there for
writing real async code, or is the idea to build everything on it? As if it
was std, but it's not... right?

~~~
steveklabnik
Sorta, kinda. One big example of when you wouldn't use Tokio is when you don't
have an operating system.

Tokio is a good, default choice, but some projects may have different needs.

------
amelius
Also known as: cooperative multitasking.

