
Async-std: an async port of the Rust standard library - JoshTriplett
https://async.rs/blog/announcing-async-std/#
======
2bitencryption
I must be dumb, because every time I dive into async/await, I feel like I
reach an epiphany about how it works and how to use it. Then a week later I
read about it again and find I've totally lost all understanding.

What do I gain if I have code like this [0], which has a bunch of `.await?` in
sequence?

I know .await != join_thread(), but doesn't execution of the current scope of
code halt while it waits for the future we are `.await`-ing to complete?

I know this allows the executor to go poll other futures. But if we haven't
explicitly spawned more futures concurrently, via something like task::spawn()
or thread::spawn(), then there's nothing else the CPU can possibly do in our
process?

[0] [https://github.com/async-rs/async-std/blob/master/examples/t...](https://github.com/async-rs/async-std/blob/master/examples/tcp-client.rs)

~~~
cheez
async/await are coroutines and continuations (bear with me).

Here is synchronous code:

    
    
        result = server.getStuff()
        print(result)
    

Here is synchronous code, that tries to be asynchronous:

    
    
        server.getStuff(lambda result: print(result))
    

Once server.getStuff completes, the callback passed to it is called with the
result.

Here is the same code with async/await:

    
    
        result = await server.getStuff()
        print(result)
    

Internally, the compiler rewrites it to (roughly) the second form. That's
called a continuation.

That's pretty much it.

A more involved example.

Synchronous code:

    
    
        result = server.getStuff()
        second = server.getMoreStuff(result+1)
        print(second)
    

Synchronous code that tries to be asynchronous:

    
    
        server.getStuff(
            lambda result: server.getMoreStuff(
              result+1, 
              lambda result2: print(result2)
        ))
    

A lot of JS code used to look like this hideous monstrosity.

Async/await version:

    
    
        result = await server.getStuff()
        second = await server.getMoreStuff(result+1)
        print(second)
    

Remember again, that it is basically transformed by the compiler into the
second form.
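Here is a runnable Python sketch of both forms side by side (the server calls are invented stand-ins that return canned values):

```python
import asyncio

# Stand-ins for the hypothetical server API above (names invented here).
def get_stuff(callback):
    callback(41)                       # pretend this value came off the wire

def get_more_stuff(n, callback):
    callback(n + 1)

# Continuation style: the "rest of the function" is passed in explicitly.
results = []
get_stuff(lambda result:
    get_more_stuff(result + 1,
        lambda second: results.append(second)))
print(results[0])                      # 43

# async/await style: the compiler builds those continuations for us.
async def get_stuff_async():
    return 41

async def get_more_stuff_async(n):
    return n + 1

async def main():
    result = await get_stuff_async()
    second = await get_more_stuff_async(result + 1)
    return second

print(asyncio.run(main()))             # 43
```

Both versions compute the same value; the only difference is who writes the continuation.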

~~~
2bitencryption
Thanks. Helpful. My question is, in this example:

    
    
        result = await server.getStuff()
        second = await server.getMoreStuff(result+1)
        print(second)
    

`await getStuff()` MUST terminate before `await getMoreStuff()` begins. So
this chunk alone is analogous to synchronous code, unless we're in the middle
of a spawned task and there are other spawned tasks in the executor that can
be picked up.
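That intuition is easy to confirm with a small Python asyncio experiment (a sketch; the same reasoning applies to Rust executors): two sequential awaits take the sum of their delays, but a second spawned task overlaps with the first.

```python
import asyncio, time

async def fetch(delay):
    await asyncio.sleep(delay)         # stand-in for a network call
    return delay

# Two sequential awaits: the second cannot start until the first finishes,
# so a single task takes ~0.2s, exactly like synchronous code.
async def sequential():
    a = await fetch(0.1)
    b = await fetch(0.1)
    return a + b

# But while this task is parked at an await, the executor runs other
# spawned tasks, so two sequential() tasks together still take ~0.2s.
async def two_tasks():
    return await asyncio.gather(sequential(), sequential())

start = time.monotonic()
asyncio.run(two_tasks())
elapsed = time.monotonic() - start
print(f"both tasks done in {elapsed:.2f}s")   # ~0.2s, not 0.4s
```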

~~~
lenkite
Yes, the idea is that the thread that is executing this piece of code can
"steal" other work when it is awaiting on either of those methods.

Frankly, in the case of sequential flow like the above, I would rather write

    
    
      result = server.getStuff()
      second = server.getMoreStuff(result+1)
      print(result)
    

and have the runtime _automatically_ perform work-stealing for me. No need for
awaits. They just litter the code. This is what Go does.

~~~
littlestymaar
In Go you need to manually tell the runtime to spawn a goroutine with the `go`
keyword, which also «litter» the code…

~~~
bsaul
Except that in practice, the go keyword is used much more coarsely and
sparingly, because you can group a whole block of function calls under one big
go call. With async, every single function has to be flagged as being
asynchronous and be called differently (although maybe some modern languages
have a way to group all the await calls?).

~~~
littlestymaar
It's funny how gophers can at the same time defend the «explicitness» of
if-based error handling and be annoyed by syntactic annotations for the
yield points of coroutines (because that's exactly what `await` is, versus
the yield points silently added by the Go compiler everywhere so the runtime
can perform its scheduling).

~~~
bsaul
not sure who you're referring to.. i certainly don't like many aspects of the
go language. goroutines and the "go" keyword aren't among them.

------
Un1corn
This reminds me of the blog post "What Color is Your Function?"[0]: they had
to create a different library that is the same as the standard library but
with async functions.

I thought Rust had other, better ways to create non-blocking code so I don't
understand why to use async instead.

[0] [https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...](https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/)

~~~
bryanlarsen
None of the 5 points in that article about callbacks in 2015 node.js apply to
async in Rust. The Rust people spent years agonizing over their version of
async and applied a lot of lessons learned from implementations in other
languages.

[https://news.ycombinator.com/item?id=20676641](https://news.ycombinator.com/item?id=20676641)

It's trivial to turn async into sync in Rust. You can use ".poll",
"executor::block_on", et cetera.

Turning sync into async is harder in any language, even Go with its easy
threading. That's a good argument for making async the default in libraries
in Rust, but since async isn't stable yet, that would have been hard to do 5
years ago.

~~~
jeremyjh
> that would have been hard to do 5 years ago.

Five years ago Rust still had green threads. Literally every standard library
I/O function was async, and the awaits were always written for you with no
effort.

It's literally taken five years to get back to an alpha that's not as good,
and we'll still have to wait for a new ecosystem to be built on top of it. I
know not everyone writes socket servers, so forcing the old model on everyone
probably doesn't make sense long-term, but I still have to shake my head at
comments like this.

[https://github.com/rust-lang/rfcs/pull/230](https://github.com/rust-lang/rfcs/pull/230)

~~~
pcwalton
> It's literally taken five years to get back to an alpha that's not as good

The new I/O system is better in several ways. First, as you acknowledged, not
everyone writes servers that need high scalability. M:N has no benefit for
those users, and it severely complicates FFI. Second, async is faster than M:N
because it compiles to a state machine: you don't have a bunch of big stacks
around.
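A rough Python sketch of what "compiles to a state machine" means: the function's progress becomes an explicit state tag plus its live variables, rather than a suspended stack (everything here is invented for illustration, loosely mirroring Rust's poll-based Future model):

```python
PENDING = object()                  # sentinel meaning "not ready yet"

class Ready:
    """A toy future that needs two polls before producing its value."""
    def __init__(self, value):
        self.value = value
        self.polls = 0
    def poll(self):
        self.polls += 1
        return self.value if self.polls >= 2 else PENDING

class GetAndAdd:
    """State machine for: `result = await get_stuff(); return result + 1`.
    Its whole "stack" is one state tag plus the inner future."""
    def __init__(self, inner):
        self.state = "awaiting_stuff"
        self.inner = inner
    def poll(self):
        if self.state == "awaiting_stuff":
            value = self.inner.poll()
            if value is PENDING:
                return PENDING      # stay parked; no stack is kept alive
            self.state = "done"
            return value + 1
        raise RuntimeError("polled after completion")

fut = GetAndAdd(Ready(41))
print(fut.poll() is PENDING)        # True: first poll parks the machine
print(fut.poll())                   # 42: second poll completes it
```

The parked state is just a couple of fields, which is why a million such futures are far cheaper than a million stacks.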

~~~
jeremyjh
Yes, it's better in several ways, but it's also worse in several ways. It
will take another five years to build a robust ecosystem for servers, and
you'll still have to be careful not to import the wrong library or std module
and accidentally block your scheduler. Plus the extra noise of .await?
everywhere.

I'm not saying it was the wrong decision five years ago, but it definitely was
a _choice_ and there could have been a different one. I was responding to
someone who said async wasn't an option five years ago.

~~~
pcwalton
M:N was slower than 1:1 in Rust. That's why it was removed. The problems you
cite are problems of async/await, but they can be addressed by just using 1:1
threads.

------
lachlan-sneff
There is further discussion about this library on the rust subreddit:
[https://www.reddit.com/r/rust/comments/cr85pp/announcing_asy...](https://www.reddit.com/r/rust/comments/cr85pp/announcing_asyncstd_beta_an_async_port_of_the/)

------
limsup
It looks great.

But it's odd that they do not cite Tokio. I know this isn't an academic
paper, but come on, have some professional courtesy and discuss the
contributions made in prior art.

~~~
ShinTakuya
Apologies if I'm misunderstanding things here, I'm just now getting back into
Rust after a couple of years of not using it. Did Tokio really inspire this
library that much?

~~~
rough-sea
Absolutely. The whole std::future interface was born out of years of careful
attempts to actually make these abstractions work in real life. async-std
didn't come from a vacuum. It's an incremental improvement on tokio that
benefits from being able to greenfield on top of the newly changed and
standardized Future trait.

Carl Lerche and the rest of the Tokio contributors deserve a citation.

------
evmar
In case anyone else was curious how you create nonblocking file I/O, it
appears to use threads.

I am curious if the number of threads is unbounded, or if they have a bounded
set but accept deadlocks, or if there is a third option other than those two
that I am unaware of.
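One common third option (used by Python's asyncio, sketched below; async-std's actual policy may differ) is a bounded pool whose submissions queue up when all workers are busy, so the thread count stays capped without deadlocking:

```python
import asyncio, concurrent.futures, os, tempfile

# Bounded worker pool: extra submissions wait in a queue instead of
# spawning more threads or deadlocking.
pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def blocking_read(path):
    with open(path, "rb") as f:
        return f.read()

async def async_read(path):
    # Offload the blocking syscall to the pool; this task suspends
    # until a worker thread finishes the read.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(pool, blocking_read, path)

async def main():
    fd, path = tempfile.mkstemp()
    os.write(fd, b"hello")
    os.close(fd)
    try:
        return await async_read(path)
    finally:
        os.remove(path)

print(asyncio.run(main()))     # b'hello'
```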

~~~
slovenlyrobot
io_uring grew support for buffered IO in recent kernels, so we should have
widespread support for this in userspace circa 2025.

~~~
dmytroi
Except that io_uring is threads running in the kernel.

There is no true async I/O on most (if not all) current platforms - it's all
threads, either in user space or in kernel space. Sometimes even deliberately,
for example polling disk will give better latency compared to waiting for IRQ.

~~~
slovenlyrobot
AFAIK Windows handles truly asynchronous buffered IO in some circumstances.
But I feel that once you're past the point of managing the abstraction or
caring about its internal details, it doesn't really matter if there is a
tiny chunk of dedicated stack in the kernel; that's a problem for the OS.

~~~
wahern
IOCP uses a pool of quasi-kernel threads (i.e. schedulable entity with a
contiguous stack for storing state) with polling, very much like how io_uring
and other incarnations of AIO in the Linux kernel work; and for that matter
it's not unlike how purely user space AIO implementations work. The benefit of
IOCP and io_uring is there's one less buffer copying operation. The biggest
benefit of IOCP, really, is that it's a blackbox that you can depend on, and
one that everybody is expected to depend upon. So it can be whatever you want
it to be ;)

~~~
Matthias247
> IOCP uses a pool of quasi-kernel threads

Is there any further documentation for it? I would have expected there
doesn't need to be a real stack: only state machines for all the IO entities
(like sockets), which get advanced whenever an outside event (e.g. an
interrupt) happens and which then signal the IO completion towards userspace.
I didn't expect that it's necessary to keep stacks around.

------
jedisct1
The documentation is great, and the API documentation includes examples for
many functions. This is really appreciated. Thank you for that!

~~~
Argorak
Thank you! :)

------
mintplant
How does this relate to Tokio [0]? Why should I choose this new library
instead?

[0] [https://github.com/tokio-rs/tokio](https://github.com/tokio-rs/tokio)

~~~
Arnavion
If all you needed from tokio was tokio::net, then async-std could work as a
replacement for raw TCP stuff. If you needed the higher-level stuff from tokio
like codecs then you'd not have those.

Also, anything from the tokio ecosystem like hyper would not work with
async-std.

Edit: I originally had a first paragraph which was wrong. I mistakenly thought
std::net::TcpListener is supposed to impl Read / Write.

~~~
Argorak
> async-std has the equivalent of std::net::TcpListener, however it does not
> appear to actually impl AsyncRead / AsyncWrite. So as of now you can't do
> anything with it. TcpStream does impl them, at least.

It does implement AsyncRead and Write, because anything with `Read` and
`Write` implements it: [https://docs.rs/async-std/0.99.3/async_std/io/trait.Read.htm...](https://docs.rs/async-std/0.99.3/async_std/io/trait.Read.html#implementors)
(that's sadly a little backwards by rustdoc)

The problem is that tokio has their _own_ versions of the AsyncRead and Write
traits.

Hyper can best be used with `async_std` through `surf`:
[https://github.com/rustasync/surf](https://github.com/rustasync/surf)

~~~
Arnavion
>It does implement AsyncRead and Write, because anything with `Read` and
`Write` implements it: [https://docs.rs/async-std/0.99.3/async_std/io/trait.Read.htm...](https://docs.rs/async-std/0.99.3/async_std/io/trait.Read.html#implementors)
(that's sadly a little backwards by rustdoc)

>impl<T: AsyncRead + Unpin + ?Sized> Read for T {

That's saying that anything that impls futures::AsyncRead impls
async_std::io::Read. async_std::net::TcpListener does not impl AsyncRead.
(Compare with TcpStream and File which do.)

>Hyper can best be used with `async_std` through `surf`:
[https://github.com/rustasync/surf](https://github.com/rustasync/surf)

Sure. You also don't _need_ surf since you can directly use futures's compat
executor wrapper around tokio's. The point is that you can't use stuff like
hyper without the tokio executor being involved.

~~~
Argorak
I'm being dumb too, I misread my own library's API :(. In any case, it's
2:30am here, I'll just head to bed :D.

------
rammy1234
Anything Rust-related gets the post to the number 1 spot. What makes Rust
special that other programming languages don't enjoy?

~~~
hathawsh
Rust is very ambitious and unusually successful at reaching its ambitions.
It's efficient like C/C++, but safer. It's modern like Go, but more expressive
and open to metaprogramming. It's often as readable as a scripting language,
but doesn't depend on garbage collection. It's a young rising star originating
from a great company.

~~~
exacube
> It's often as readable as a scripting language

IMO only if you're doing simple things the standard library provides
utilities for. I haven't found it to be very readable once code grows in
complexity, but I'm also not very experienced.

~~~
hathawsh
I agree that libraries have a major impact on the perceived readability of a
programming language. As an example, it used to be quite messy to issue HTTP
requests from Python, but then the Requests library appeared, and suddenly it
became much easier to write readable client libraries. Rust code will become
more readable as its libraries mature.

------
jedisct1
How does it balance tasks across CPU cores?

The thing I like in Go is that I don’t have to worry about that, it’s all
automatic.

~~~
Argorak
It does the scheduling for you. That's why all Futures spawned as tasks
through `async_std::task` must be `Send`. That's Rust parlance for "can be
safely migrated between threads".

It's not Go, but we know what people like about Go. <3

------
davidhyde
Great library, well done. In case anyone was wondering, this is not a
`no_std` crate even though it can be used as a replacement for std library
calls. I guess it (obviously) can't be, since it interfaces with the
operating system so much.

~~~
Argorak
Project member here.

It exports stdlib types (like io::Error) where appropriate so that libraries
working with these can stay compatible, so `no_std` is not really an option.

The underlying library (async-task) is essentially core + liballoc; it's just
that no one has made the effort to spell that out yet.

~~~
thethirdone
Would there be any benefit to making a `no_std` option? I can't think of a
situation you would want async std and have including std be a problem.

~~~
jeremyjh
The benefit would be that it would be impossible to wreck your scheduler by
calling sync I/O functions in an async task.
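A small Python demonstration of that failure mode (the principle is the same on any single-threaded executor): one blocking call stalls every other task on the loop.

```python
import asyncio, time

async def well_behaved():
    await asyncio.sleep(0.1)           # yields back to the scheduler

async def bad_task():
    time.sleep(0.1)                    # blocking call: never yields

async def demo(use_blocking):
    start = time.monotonic()
    first = bad_task() if use_blocking else well_behaved()
    await asyncio.gather(first, well_behaved(), well_behaved())
    return time.monotonic() - start

good = asyncio.run(demo(False))
bad = asyncio.run(demo(True))
print(f"all-async: {good:.2f}s")       # ~0.1s: the sleeps overlap
print(f"one blocking: {bad:.2f}s")     # ~0.2s: the blocking call serializes
```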

~~~
Argorak
I wouldn't rely on that. Next step, someone binds to a blocking database
driver and you are back at square one again. This is definitely not rigorous.

I would love to see a lint for known-blocking constructs in async contexts,
though: [https://github.com/rust-lang/rust-clippy/issues/4377](https://github.com/rust-lang/rust-clippy/issues/4377)

Also, having explicit imports and types whose names collide helps there, for
once.

~~~
jeremyjh
How would you have a blocking database library that doesn't use the standard
library?

~~~
Argorak
Any code in Rust is free to bind via FFI, and sockets can be obtained through
`libc`.

~~~
jeremyjh
I didn't literally mean "How is that possible?"

I meant: is that a real thing? Is there a database binding out on crates.io
that uses no_std ?

~~~
pcwalton
SQLite is C code and any Rust usage of it will not play nicely with M:N. This
is just off the top of my head. I'm sure there are plenty of other examples.

------
bfrog
Very cool! Now we need a DPDK equivalent to really blow away people's
expectations of what is fast.

