Async-std: an async port of the Rust standard library (async.rs)
349 points by JoshTriplett 36 days ago | 234 comments



I must be dumb, because every time I dive into async/await, I feel like I reach an epiphany about how it works and how to use it. Then a week later I read about it again and have totally lost all understanding.

What do I gain if I have code like this [0], which has a bunch of `.await?` in sequence?

I know .await != join_thread(), but doesn't execution of the current scope of code halt while it waits for the future we are `.await`-ing to complete?

I know this allows the executor to go poll other futures. But if we haven't explicitly spawned more futures concurrently, via something like task::spawn() or thread::spawn(), then there's nothing else the CPU can possibly do in our process?

[0] https://github.com/async-rs/async-std/blob/master/examples/t...


A good example: say you want to handle 100k TCP sessions concurrently. You probably don't want to launch 100k threads, considering the overhead of creating them and constantly switching between them. You also don't want to do things synchronously, as you'll constantly be waiting on pauses instead of doing work on the 100k sessions. So you launch 100k instances of an async function, and they all stay in a single thread (or a couple of threads if you want to utilize multiple cores for the work); instead of constantly waiting on pauses, the program simply works through the backlog of queued-up events.

Same code flow just it allows you to launch the same thing multiple times without having to wait for the whole thing to finish sequentially or wait on the OS to handle your threads.
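The shape of this can be sketched in Python with asyncio (a smaller session count here; `handle_session` and its sleep are hypothetical stand-ins for real socket work):

```python
import asyncio

# Hypothetical sketch: many concurrent "sessions" on one thread.
# asyncio.sleep stands in for waiting on the network.
async def handle_session(session_id: int) -> str:
    await asyncio.sleep(0.01)            # the pause we'd otherwise block on
    return f"session {session_id} done"

async def serve(n: int) -> list:
    # All n coroutines share a single thread; while one is paused on
    # its await, the event loop works through the others.
    return await asyncio.gather(*(handle_session(i) for i in range(n)))

results = asyncio.run(serve(1000))
print(len(results))
```

All 1000 "sessions" finish in roughly the time of one pause, because the waits overlap instead of stacking up sequentially.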


This is missing a crucial explanation: the underlying OS APIs are asynchronous.


Yeah, for sure. In Java/C# I see people do this all the damn time: use an async method for a REST endpoint, then make a blocking DB call inside it. Or even worse, make a non-async REST call to another service from inside an async handler.

As soon as you do that, your code isn't async anymore. And if you're using a framework like Vert.X or node that only runs one thread per core you're in big trouble.

The most reasonable answer I've seen to all this is Java's Project Loom. An attempt to make fibers transparently act like threads, so you can use regular threaded libraries as async code.

Rust is going to have the same problem Java does with async. A lot of code was written way before async was available, and it's not always obvious whether something blocks.


It's possible to write crappy code, async or otherwise.

In my c# world, I use async methods for REST endpoints, which in turn use async calls for anything IO-bound (database, message bus, distributed key store, file system etc). I think more often than not, it's done correctly.


A message broker works here when you want async behaviour but you are integrating with sync code. To use your REST example, you receive the call, send a message to DoSomething and then immediately return http 202, perhaps with some id the ui can poll on (if required). Meanwhile, the DoSomething message queue is serviced by a few threads.


That works, but it's an uncommon pattern. Most people prefer to wait, in my opinion. A single DB worker doing batch updates would probably be enough.


Does this mean that rust async is using poll/epoll/kqueue under the hood?


Yes, the executor will use whatever io multiplexing the platform provides (and it’s been coded to support).

If the executor is Tokio, it’s built on mio which will use one of kqueue, epoll or iocp depending on the platform: https://docs.rs/mio/0.6.19/mio/struct.Poll.html#implementati...


Strictly speaking, it's not tied to any particular method. It depends on your executor. That said, the most popular executor does use epoll/kqueue/iocp. (tokio)


Doesn't the executor need to be aware of all the different mechanisms that can be used to poll, and so there's an implicit coupling between the async function implementation and the executor?

For example, socket.read() might return a future that represents a read on a file descriptor. I don't know the internals of Rust's async support at all, but presumably the future is queued up and exposes some kind of trait that Tokio et al can recognize as being an FD so it can be polled on using the best API such as epoll_wait() or whatever.

But let's say there's some kernel or hardware API or something that has a wait API that isn't based on file descriptors, and I implement my own async function get_next_event() that uses this API. Do I need to extend Tokio, or the Rust async runtime API, to make it understand how to integrate this into its queues? In a non-FD case, wouldn't it have to spawn a parallel thread to handle waiting for that one future, since it can't be included in epoll_wait()?


I slightly mis-spoke in a sense, yeah. This stuff has changed a bunch over the last few years :)

So, futures have basically two bits of their API: the first is that they're inert until the poll is called. The second is that they need to register a "waker" with the executor before they return pending. So it's not so much that the executor needs to know details about how to do the polling; but the person implementing socket.read() needs to implement the future correctly. It would construct the waker to do the right thing with epoll. Tokio started before this style of API existed, and so bundles a few concepts in the current stack (though honestly, an integrated solution is nicer in some ways, so I don't think it's a bad thing, just that it makes it slightly easier to conflate the pieces since they're all provided by the same package.)

Async/await, strictly speaking, is 100% agnostic of all of this, because it just produces stuff with the Futures interface; these bits are inside the implementation of leaf futures. And executors don't need to know these details, they just need to call poll at the right time, and in accordance with their wakers.

I can't wait until the async book is done, it's really hard remembering which bits worked which way at which time, to be honest.
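A loose analogy for the "leaf future + waker" idea, using asyncio's Future (here `call_later` stands in for the OS event source that triggers the wake; this is a sketch of the concept, not how Rust implements it):

```python
import asyncio

# A "leaf future" that is inert until awaited, and is "woken" when an
# external event source completes it -- the executor just awaits it.
async def leaf_read(loop: asyncio.AbstractEventLoop) -> bytes:
    fut = loop.create_future()
    # Pretend an OS-level event fires 10 ms from now and wakes the task.
    loop.call_later(0.01, fut.set_result, b"data")
    return await fut  # suspended here until set_result runs

async def main() -> bytes:
    return await leaf_read(asyncio.get_running_loop())

data = asyncio.run(main())
print(data)
```

The event loop never inspects how `leaf_read` waits; it only resumes the task when the completion callback fires, which mirrors "executors just call poll and honor the wakers."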


> And executors don't need to know these details, they just need to call poll at the right time, and in accordance with their wakers.

Is this true? Essentially this is claiming that an executor does not need to use mio (epoll/kqueue/...) to be able to execute futures that do async network i/o.

So who uses mio? Would each type implementing the Future trait use mio internally as a private detail? That is, using two such future types, would they maintain multiple independent kqueues and the executor isn't able to put them both in one?


mio is useful when you’re writing an application that runs on windows/Linux/Mac. But you can use futures on any platform, including embedded ones. Those would be coded against whatever API the system offers. There’s an embedded executor, for example.

Tokio uses mio to implement its futures that do async IO, so if you use Tokio, you use mio. You don’t have to use Tokio, though it is the most popular and most battle tested.

Many futures don’t do IO directly; for example, all of the combinator futures. Libraries can be written to be agnostic to the underlying IO, only using the AsyncRead/AsyncWrite traits, for example.


The TL;DR: is that, while the `std::future::Future` trait is generic, the actual type that implements this trait is often tied to a particular executor.


This really helped my practical understanding - Thanks!


async/await are coroutines and continuations (bear with me).

Here is synchronous code:

    result = server.getStuff()
    print(result)
Here is synchronous code that tries to be asynchronous:

    server.getStuff(lambda result: print(result))
Once server.getStuff completes, the callback passed to it is called with the result.

Here is the same code with async/await:

    result = await server.getStuff()
    print(result)
Internally, the compiler rewrites it to (roughly) the second form. That's called a continuation.

That's pretty much it.

A more involved example.

Synchronous code:

    result = server.getStuff()
    second = server.getMoreStuff(result+1)
    print(result)
Synchronous code that tries to be asynchronous:

    server.getStuff(
        lambda result: server.getMoreStuff(
          result+1, 
          lambda result2: print(result2)
    ))
A lot of JS code used to look like this hideous monstrosity.

Async/await version:

    result = await server.getStuff()
    second = await server.getMoreStuff(result+1)
    print(result)
Remember again, that it is basically transformed by the compiler into the second form.


Thanks. Helpful. My question is, in this example:

    result = await server.getStuff()
    second = await server.getMoreStuff(result+1)
    print(result)
`await getStuff()` MUST terminate before `await getMoreStuff()` begins. So this chunk alone is analogous to synchronous code, unless we're in the middle of a spawned task and there are other spawned tasks in the executor that can be picked up.


Yup, your understanding is correct. That code behaves equivalently to the synchronous version, and the only benefit is that the thread can run other tasks while it's waiting for the getStuff() and getMoreStuff() to come back.

Async/await is really popular in the JavaScript community because in web apps, you usually only have a single thread of execution which you share with the browser UI code. So if your code made a network request synchronously, the user might not be able to scroll or click links or anything until it finished.
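That trade-off is easy to see in a small asyncio sketch (the sleeps are stand-ins for the network round-trips of `getStuff`/`getMoreStuff`):

```python
import asyncio
import time

async def get_stuff() -> int:
    await asyncio.sleep(0.1)   # simulated network latency
    return 1

async def get_more_stuff(x: int) -> int:
    await asyncio.sleep(0.1)
    return x + 1

async def one_request() -> int:
    # Sequential within one task: the second await starts only after
    # the first completes, exactly like the synchronous version.
    result = await get_stuff()
    return await get_more_stuff(result + 1)

async def main():
    start = time.monotonic()
    # Across tasks the waits overlap: ten requests take ~0.2 s, not ~2 s.
    results = await asyncio.gather(*(one_request() for _ in range(10)))
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
print(results[0], round(elapsed, 1))
```

Each task alone behaves like the synchronous code; the win only appears once several tasks share the thread.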


Yes, the idea is that the thread that is executing this piece of code can "steal" other work when it is awaiting on either of those methods.

Frankly, in the case of sequential flow like the above, I would rather write

  result = server.getStuff()
  second = server.getMoreStuff(result+1)
  print(result)
and have the runtime automatically perform work-stealing for me. No need for awaits. They just litter the code. This is what Go does.


Thus the FFI impact on Go when crossing a language boundary, and workarounds like runtime.LockOSThread().


Go does not do that implicitly, there is an explicit "go" syntax.

Gevent in Python does something similar [implicit switching] using dirty monkeypatching. It is great while it works. Sooner or later, explicit cooperative concurrency such as that provided by async/await syntax wins (e.g., the asyncio, trio, and curio Python libraries).


In Go you need to manually tell the runtime to spawn a goroutine with the `go` keyword, which also «litter» the code…


Except in practice, the go keyword is used much more coarsely and sparingly, because you can group a whole block of function calls under one big go call. With async, every single function has to be flagged as being asynchronous and be called differently (although maybe some modern languages have a way to group all the await calls?).


It's funny how gophers can at the same time defend the «explicitness» of if-based error handling and be annoyed by syntactic annotations for the yield points of coroutines (because that's exactly what `await` is, versus the yield points silently added by the Go compiler everywhere so the runtime can perform its scheduling).


Not sure who you're referring to... I certainly don't like many aspects of the Go language, but goroutines and the "go" keyword aren't among them.


Correct, if there's nothing else that can be picked up it'll behave essentially the same as the non-async code. But it'll mean that adding other things to be done at a later stage is much much easier than trying to do so without it.


I don't understand your examples of "Synchronous code that tries to be asynchronous". In fact, the examples you provided are of asynchronous code being...asynchronous. Callbacks are asynchronous (or to be fully correct, I should say that they allow one to program asynchronously, which is exactly what async/await does).

Indeed, since you mention continuations, I'm sure you realize that they're more or less callbacks.


This is an incredibly helpful explanation. I went from not really knowing what all this mumbo jumbo was about to a useful mental model. Cheers!


    server.getStuff(gotStuff)
    server.getMoreStuff(gotMoreStuff)
Just using functions is simpler and also more powerful. A function being async usually means stuff will happen, things can go wrong, and you might want to do different things depending on the response or whether it failed.

Where await is useful though is in serial execution of async functions that really should be sync, but them being async is an optimization in order to not block the thread.

It is really unfortunate that so much extra cruft had to be introduced to JS (coroutines, async, promises) in order to be able to await. Reading complex Promise-based code is very unpleasant, with async functions pretending to be pure, without any error handling, full of side effects, and with omitted returns.

With callbacks we had inexperienced programmers writing pyramids of callbacks and if logic. But it was not that bad, as the complexity was in your face, and not hidden under layers of leaky abstraction.


So, the power of async await is no greater than the thread pool manager sitting under it.

You are correct. In edge cases where there is only 1 await in the queue for 1 process with 1 thread you gain nothing.

But you're accurately describing an edge case where await has limited value.

Await's true power shows up when you anticipate having multiple in-flight operations that all will, at overlapping points, be waiting on something.

Rather than consume the current thread while waiting, you're telling the run-time, go ahead and resume another task that has reached the end of its await.

This was possible before using various asynchronous design patterns, but all of them were clunky, in that they required boilerplate code to do what the compiler should be able to figure out on its own:

"Hey, runtime. This is an asynchronous call. Go do something useful with this thread and get back to me."

Second, await is MUCH EASIER for future developers to process because it looks exactly like any other method call and makes it easy to reason about the logic flow of the code.

Rather than chasing down async callbacks and other boilerplate concepts to manually handle asynchronous requests, the code reads like its synchronous twin.

    int a = await EasyToFollowAsyncIntent();

This makes the code much easier to reason about.

To me those are the 2 biggest gains from async.

1. Less boilerplate code for asynchronous calls.

2. Code remains linearly readable despite being highly asynchronous.


In small examples like this, you don't gain anything. For the sake of the example, we just run one task. But you _could_ run 100 with them. And at each of those `awaits`, they could schedule differently.

For a more complex networked application, we have the tutorial here: https://github.com/async-rs/a-chat


Wouldn't it make more sense to show an example that actually takes advantage of async/await? I don't get why they are using examples that need a disclaimer like "you could run 100 jobs" for this to make sense. So it should include that in the example (and it should probably do something that makes sense if it's run a hundred times).


The example is intended for you to be able to implement it, not as a showcase.

I think the expectation with Rust async-await at the moment is likely that people are familiar with async syntax from other languages e.g. Python - it's not even in beta yet, you need to be running nightly to get the syntax.


Yeah if there are no other futures spawned, then the await is going to cause the app to just sit there until the future completes. It's got nothing better to do.

If there was another future spawned, then the await would cause the runtime to sit there until either of the futures completed. The code would attend to the first future that completes, until that hits an await.


And where it all comes together is when async lets us write concurrent functions that compose together well, in a way that functions that can block on IO and lock acquisition do not.


How so? The same API is trivial to implement using threads and futures. A future, after all, is just a one shot channel or rendezvous point.

Async is just a way to get cooperative threads compiled to a static state machine, trading lower concurrent utilization and throughput for less context switch overhead and lower latency.


Right, that specific instance is essentially a single-threaded* application.

Now imagine that you spawned a few hundred of them with JoinAll. Each would run, multiplexed within a single thread, with execution being passed at the await points.

* anyone know the correct nomenclature for this? Single-coroutine?


Cooperative threads are still threads, they just aren't preemptive.


I suppose you're right. The example is both single-threaded at the OS level and single-threaded at the program level.


Serial?


Async is a hard concept but what can be revealing is going through the three steps:

1. Get used to callbacks in NodeJS, for example write some code using fs that reads the content of a file, then provide a callback to print that content.

2. Get used to promises in NodeJS, for example turn the code in #1 into a promise by creating a function that calls the resolve/reject handler as appropriate in the callback from opening that file. Then use the promise to open the file and use .then(...) to handle the result.

3. Now do it in async. You have the promise, so you just need to await it and you can inline it.

By doing it in the 3 steps I find it is more clear what is really happening with async/await.
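The same three-step progression, sketched in Python rather than Node (`read_file_cb` is a hypothetical callback API standing in for fs.readFile):

```python
import asyncio

# Step 1: a callback-style API (hypothetical stand-in for fs.readFile).
def read_file_cb(path, callback):
    callback(None, f"contents of {path}")   # callback(err, data)

read_file_cb("a.txt", lambda err, data: print(data))

# Step 2: wrap the callback API in a future (the promise step),
# resolving or rejecting as appropriate.
def read_file_future(loop, path):
    fut = loop.create_future()
    def on_done(err, data):
        fut.set_exception(err) if err else fut.set_result(data)
    read_file_cb(path, on_done)
    return fut

# Step 3: with a future in hand, just await it inline.
async def main():
    loop = asyncio.get_running_loop()
    return await read_file_future(loop, "a.txt")

data = asyncio.run(main())
print(data)
```

Each step keeps the same underlying operation; only the way the completion is delivered changes, which is what makes the progression clarifying.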


Say you want to listen to a socket and also receive input from the keyboard at the same time.

If you have an async method that can wait for input on both devices, you can await the results of both of them, and they won't block eachother.
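A sketch of that in Python, with sleeps as hypothetical stand-ins for waiting on each device:

```python
import asyncio

# Each sleep represents waiting for input on a device.
async def socket_recv() -> str:
    await asyncio.sleep(0.02)
    return "packet"

async def keyboard_read() -> str:
    await asyncio.sleep(0.01)
    return "keypress"

async def main() -> list:
    # Await both together: neither wait blocks the other.
    return await asyncio.gather(socket_recv(), keyboard_read())

events = asyncio.run(main())
print(events)
```

The keyboard "input" arrives while the socket is still pending, yet both results come back from a single await point.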


> I must be dumb

Nope, async really isn't trivial.

> I know .await != join_thread(), but doesn't execution of the current scope of code halt while it waits for the future we are `.await`-ing to complete?

It doesn't, that's the charm of it.

It's best to treat 'await' as syntactic sugar, and to dig in to the underlying concepts.

I realise we're not talking C#/.Net, but that's what I know: in .Net, your function might do slow IO (network activity, say) then process the result to produce an int. Your function will have a return-type of `Task<int>`. Your function will quickly return a non-completed Task object, which will enter a completed state only once the network activity has concluded and processing has occurred to give the final `int` value.

The caller of your function can use the `Task#ContinueWith` method, which enqueues work to occur if/when the Task completes, using the result value from the Task. (We'll ignore exceptions here.)

Internal to your function, the network activity itself will also have taken the form of a standard-library Task, and our function will have made use of its `ContinueWith` method. Things can compose nicely in this way; `Task#ContinueWith` returns another Task.

(We needn't think about the particulars of threads too much here, but some thread clearly eventually marks that Task object as completed, so clearly some thread will be in a good position to 'notice' that it's time to act on that `ContinueWith` now. The continuation generally isn't guaranteed to run on the same thread as where we started. That's generally fine, with some notable exceptions.)

You might think that chain-invoking `ContinueWith` would get tedious, as you'd have to write a new function for each step of the way if we make use of several async operations - each continuation means writing another function to pass to `ContinueWith`, after all. Perhaps it would be more natural to just write one big function and have compiler handle the `ContinueWith` calls.

You'd be right. That's why they invented the `await` keyword, which is essentially just syntactic sugar around .Net's `ContinueWith` method. It also correctly handles exceptions, which would otherwise be error-prone, so it's generally best to avoid writing continuations manually.

There's more machinery at play here of course, but that seems like a good starting point.

Assorted related topics:

* If you use `ContinueWith` on a Task which is already completed, it can just stay on the same thread 'here and now' to run your code

* It's possible to produce already-completed Task objects. Rarely useful, but permitted.

* There's plenty going on with thread-pools and .Net 'contexts'

* The often-overlooked possibility of deadlocking if you aren't careful [0]

* None of this would make sense if we had to keep lots of background threads around to fire our continuations, but we don't [1]

* Going async is not the same thing as parallelising, but Tasks are great for managing parallelism too

* This stuff doesn't improve 'straight-line' performance, but it can greatly improve our scalability by avoiding blocking threads to wait on IO. (That is to say, we can better handle a high rate of requests, but our speed at handling a lone request on a quiet day, will be no better.)

I found this overview to be fairly digestible [2]

[0] https://blog.stephencleary.com/2012/07/dont-block-on-async-c...

[1] https://blog.stephencleary.com/2013/11/there-is-no-thread.ht...

[2] https://stackoverflow.com/a/39796872/

See also:

https://docs.microsoft.com/en-us/dotnet/standard/parallel-pr...

https://docs.microsoft.com/en-us/dotnet/api/system.threading...
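For a rough cross-language analogue of the `ContinueWith` idea: Python's concurrent.futures exposes the same continuation mechanism as add_done_callback (a sketch, not a claim about .Net internals):

```python
from concurrent.futures import ThreadPoolExecutor

done = []

def slow_io() -> int:
    return 41   # stand-in for slow network activity

with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(slow_io)
    # add_done_callback plays the role of Task#ContinueWith: the
    # continuation runs if/when the future completes, using its result.
    fut.add_done_callback(lambda f: done.append(f.result() + 1))

print(done)
```

As with `ContinueWith`, the continuation may run on whichever thread completed the future, and it fires immediately if the future is already done when the callback is attached.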


> It's best to treat 'await' as syntactic sugar, and to dig in to the underlying concepts.

Slight word of warning: `async/await` is more than just sugar in Rust, it also enables borrowing over awaits, which was previously not possible.


Interesting, thanks.


Nerding out a bit more (it was a bit late yesterday): this is why the Future takes this weird type called a `Pin`, which is a guarantee that the value does not move in memory while the Future is polled. This is also one of the reasons the feature took so long: Rust previously only had ways to detect potential moves in memory, but could not disallow them.

https://doc.rust-lang.org/std/future/trait.Future.html#requi...


Interesting ideas. I really must learn Rust properly.


async/await is all about letting you write serial looking code with the smallest memory footprint short of writing hand-coded continuation passing style (CPS) code.


User-space threads and coroutines are pretty much the right abstraction for application code.

Async can be useful when more control over the details of execution is needed.


Async/await and futures/promises (and before that, stuff like Java's Executor abstraction) are being added to a lot of languages because it is very difficult for even experienced developers to manage threads in a bug-free way.

I've seen a lot of people try to manage complex programs by working with threads directly. Whatever they come up with is very unlikely to be as correct and reliable as the abstractions provided by the language. Even when they get it right, programs written with those techniques are difficult to modify without introducing new bugs.

Manual management of threads is becoming like manual management of memory -- it is discouraged by newer language features and you should only do if you really need to.


This reminds me of the blog post "What Color is Your Function?"[0]; they had to create a different library that is the same as the standard library but with async functions.

I thought Rust had other, better ways to create non-blocking code so I don't understand why to use async instead.

[0] https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...


> I thought Rust had other, better ways to create non-blocking code so I don't understand why to use async instead.

In fact, Rust does have a great solution for nonblocking code: just use threads! Threads work great, they are very fast on Linux, and solutions such as goroutines are just implementations of threads in userland anyway. (The "what color is your function?" post fails to acknowledge that goroutines are just threads, which is one of my major issues with it.) People tell me that Rust services scale up to thousands of requests per second on Linux by just using 1:1 threads.

Async is there for those who want better performance than what threads/goroutines/etc. can provide. If you don't want to deal with two "colors" of functions, you don't have to! Just use threads.
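A minimal sketch of the "just use threads" approach in Python: plain blocking handlers, one OS thread each, no async machinery (`handle` is a hypothetical request handler):

```python
import queue
import threading

# One blocking handler per OS thread, no async machinery.
def handle(conn_id: int, results: queue.Queue) -> None:
    results.put(f"handled {conn_id}")   # stand-in for blocking I/O work

results = queue.Queue()
threads = [threading.Thread(target=handle, args=(i, results))
           for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results.qsize())
```

Every function here has a single "color"; the cost is one kernel thread (and its stack) per unit of concurrency.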


Threads are bad for high concurrency. Specifically when you need to call out to another service that has some latency.

Say you have 1000 threads. To handle a request each one needs to make 50ms of external or DB calls. In one second, each thread can handle 20 calls. So you can handle 20k requests/second with 1000 threads. But Rust is so fast it can serve 500k requests a second. So with regular threads, you need ~25,000 threads. The OS isn't going to like that.

With async you can run a single thread per core, with no concurrency limits. So you get your 500k requests without overhead. With fibers you just run 20k fibers which is a little bit of overhead but easy to do.

This is the core reason everyone is pushing async and fibers in fast languages. When you can push a ton of requests/second but each one has latency you can't control, regular threads will kneecap performance.

In "slow" languages like Python, Ruby, etc, async/fibers don't really matter because you can't handle enough requests to saturate a huge thread pool anyways.


Java services have managed for a long time to do just fine. Usually you just have dedicated threadpools for those db/ whatever calls.

But yes, eventually, for very heavy cases (more than what I would call "high") you will want async/await.


Which can still be done via java.util.concurrent (Callable, Futures, Promises, Flow) until Project Loom arrives.


25,000 threads are perfectly fine on Linux.


It is the memory associated with a POSIX thread that becomes the limit.


Why only 1000 threads? Why not 10k or 100k?

With an 8k stack for each, you can easily have 10k-100k threads on a low-end system.


Let's be real here: it's not just the memory requirements, because context switching and the associated nuking of CPU caches are not free. You can go very far with threads nowadays, but you can go much farther with async code, if you really need to.


Nobody is denying that async code is faster. But it’s not as dramatic as presented in the grand parent post.

And IMHO the added code complexity is not worth the trouble.


We have some pretty vanilla file upload code that needs async. S3 latency is fairly high. If you're uploading a few tiny files per user per second, thread usage gets out of hand real fast.

With a simulated load of ~20 users we were running over 1000 threads.

Several posts in the chain say that 20k+ threads is "fine". Not unless you have a ton of cores. The memory and context switching overhead is gigantic. Eventually your server is doing little besides switching between threads.

We had to rewrite our s3 code to use async, now we can do many thousands of concurrent uploads no problem.

Other places we've had to use async is a proxy that intercepts certain HTTP calls and user stats uploader that calls third party analytics service.

Just sayin it's not that unusual to need async code because threading overhead is too high


In what language?


Java


> And IMHO the added code complexity is not worth the trouble.

The thing is, this is just that - your opinion, generalized as The Truth. But engineering is about making the right trade-offs. Often threading will be fine, you'll win simplicity, and all is good. But sometimes you really need the performance, or your field is crowded and its a competitive advantage. Think large-scale infrastructure at AWS, central load-balancers, or high-freq-trading.


It goes deeper than that. There is plenty of research showing that shared memory multithreading is not even a viable concurrency model. The premise that threads are fine and simple is just false.


I'm not sure what you mean. One of Rust's major research contributions is to show that shared memory multithreading is a perfectly viable concurrency model, as long as you enforce ownership discipline to statically eliminate data races.


> The thing is, this is just that - your opinion, generalized as The Truth.

Heh? Where?


In Python I use async instead of threads for reasons unrelated to performance. https://glyph.twistedmatrix.com/2014/02/unyielding.html


> The "what color is your function?" post fails to acknowledge that goroutines are just threads, which is one of my major issues with it.

"""Three more languages that don’t have this problem: Go, Lua, and Ruby.

Any guess what they have in common?

Threads. Or, more precisely: multiple independent callstacks that can be switched between. It isn’t strictly necessary for them to be operating system threads. Goroutines in Go, coroutines in Lua, and fibers in Ruby are perfectly adequate."""

What more do you need?


As an aside, Zig[0] recently merged a change to try and tackle the use case of [a]sync-agnostic functions: https://github.com/ziglang/zig/issues/1778. The "What Color is Your Function" blog post appears to have been one of the inspirations behind the change. Some of Andy's (the language creator) recent videos go over it in detail: https://www.youtube.com/channel/UCUICU6mgcyGy61pojwuWyHA

[0]: https://ziglang.org


That looks quite interesting!

The idea is that you can set the global "io_mode" to blocking, mixed, or evented, and I/O functions will switch their implementation accordingly. The type of the function will then, if I got that right, propagate up the call stack and turn functions that touch it transparently into either normal or async/awaitable functions.

Nice way to avoid a bifurcation of the ecosystem into red/green functions. It's a bit magical maybe; any other trade-offs?


Please excuse my cynicism, but how can a global variable for switching between blocking and async be considered interesting?

I mean, don’t you know at compile-time whether you want something to be async or not? If so, it should be handled by the type system, not by mutating a variable at runtime.


I believe this is a compile time setting. The point is to avoid the red/blue function color issue, which has implications for all functions upstream of I/O functions. It literally is about the type system. Did you even read the linked text?


One trade-off at the moment (if I understood this correctly) is that function pointers lose this transparency, and you have to be explicit about them being async (or not). Maybe the plan is to lift this restriction in the future? I am not privy to that.


None of the 5 points in that article about callbacks in 2015 node.js apply to async in Rust. The Rust people spent years agonizing over their version of async and applied a lot of lessons learned from implementations in other languages.

https://news.ycombinator.com/item?id=20676641

It's trivial to turn async into sync in Rust. You can use ".poll", "executor::block_on", et cetera.

Turning sync into async is harder in any language, even Go with its easy threading. That's a good argument for making async the default in libraries in Rust, but since async isn't stable yet, that would have been hard to do 5 years ago.
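The async-into-sync direction looks the same in Python, where asyncio.run plays the role of something like block_on (a sketch of the concept, not of Rust's API):

```python
import asyncio

async def fetch() -> int:
    await asyncio.sleep(0.01)   # simulated I/O
    return 42

# Synchronous code driving an async function to completion --
# the analogue of blocking on a future from non-async code.
value = asyncio.run(fetch())
print(value)
```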


Literally none of your rebuttals actually rebut the claims. The very fact that async-std exists is absolute proof that the issue remains: if there weren't a "color" problem, there wouldn't be any need for a "port" of the standard library.

Rust had legitimate reasons for taking the approach that they did. One can agree that they made the correct decision without excusing and obscuring the consequential costs.


async-std is taking the “always red” approach that the article mentions, which wasn’t possible until now because async is hot off the presses. The rest of the arguments in the article rest on the point that “red functions are more clumsy to call,” which doesn’t hold for Rust, but does hold for JavaScript.


The article explicitly admits that async/await is ergonomically much nicer than explicit futures/promises. But the color problem still remains, one consequence of which is duplication of code and interfaces.

Arguing that the problem doesn't exist if you only stick to functions of a single color isn't a rebuttal, it's an admission! But the fact of the matter is async functions have real limitations and costs, which is why they're not the default in Rust, which in turn is why any Rust program will always have some mix of differently colored functions. But, yeah, the fewer of one color and the more of the other color, the better. That's the point.


Once again, the article is about JavaScript, and everything it says still holds today; introducing async/await didn't change anything. Sync functions can only call other sync functions and use the result immediately. To use an async function, you have to convert the caller to an async function too, which can be anything from annoying to impossible.

So yes, Rust still has colors, but it doesn’t matter because a red function can call a blue one without a problem and vice versa. You’re right in saying that async functions have a cost and shouldn’t be used indiscriminately - so just use them when it makes sense. As opposed to JavaScript, Rust doesn’t make you commit to one or the other early and either face major refactors in the future or pay the price of async when it’s not required.

P.S. I think there are some caveats for library authors and also to blocking the thread on a single future, but maybe more qualified people can comment on those.


I think the point is that "colored" functions only existed because Rust did not previously have async support. Now that it has async support, new code can be one color: async, while maintaining ergonomics.

Maybe new code will be exclusively async and existing code will switch over.


Not all new code should be async. I write graphics code. There is no benefit to me, or any of my users, if all of my code is async. No system has 10,000 simultaneous GPUs to drive independently.


I agree with your general point, but I do want to point out (as I'm sure you're aware) there's plenty of asynchronous logic in graphics code.

Some (but not all) of which might even benefit from async... although graphics code has its own solutions to many of these problems, and it certainly wouldn't be the bread and butter of your core render loop.

1) For performance reasons, your GPU consumes command buffers after a decent delay from when your CPU requests it. This means async logic crops up for screenshot/recording readbacks, visibility queries, etc. assuming you don't want to simply stall everything and tank your framerate.

2) New lower level graphics APIs expose the asynchronous logic of command submission more than ever before, limiting safe CPU access to memory based on what the GPU is still accessing. This sometimes spills into higher level APIs - e.g. bgfx buffer uploads can either take a reference to memory (fast) - which you must keep valid and unmodified for a frame or two (asynchronous, and currently difficult to expose a sound+safe API for to Rust) - or it can make an extra deep copy (perf hit) to pretend it's behaving in a more synchronous fashion.

3) Resource loading is heavily asynchronous. You don't want to stall a game out on blocking disk I/O for a missing minimap icon if you can just fade it in a few seconds later. I might not have 10,000 GPUs to drive, but I've certainly had 10,000 assets to load, semi-independently, often with minimal warning.


We can never eliminate sync stuff, because async requires a runtime. Async is great when you need it, but you don't always need it, and you shouldn't have to pay the cost if you don't plan on using it.


> but holds for JavaScript.

It held in 2015, but no longer, since JS got async/await.

This blog post isn't really interesting anyways, and its popularity mainly comes from the zealotry of gophers.


Nope, it still holds. It’s in fact impossible to call an async function from a sync function and return the result. To use await you have to make the function async which means the caller needs to be async-aware and so on, all the way to the top of the stack.

There are hacks like “deasync”, but I personally wouldn’t use it.

https://github.com/abbr/deasync

Rust can block on an individual future so, say, a sync callback can still take advantage of async functions.


But you don't need `await` to call an async function; you can use a regular function call in sync code, and the function returns (synchronously) a Promise.

What cannot be done is to perform a blocking call on a Promise from a sync function. And that is by design because JavaScript has a single threaded runtime.


Given the history of JS, not being able to call an async function from a sync function is a non-issue. JS went from callbacks to promises to async/await (sugar on top of promises).


> that would have been hard to do 5 years ago.

Five years ago Rust still had green threads. Literally every standard library I/O function was async, and the awaits were always written for you with no effort.

It's literally taken five years to get back to an alpha that's not as good, and we'll still have to wait for a new ecosystem to be built on top of it. I know not everyone writes socket servers, and so forcing the old model on everyone probably doesn't make sense long-term, but I still have to shake my head at comments like this.

https://github.com/rust-lang/rfcs/pull/230


Green threads have no place in a low level systems language like Rust whose design goals are zero cost abstraction and trivial C interop.

D made a similar mistake by requiring a GC/runtime from the start, and now, even though they added ways to avoid it, the ecosystem and the language design are "poisoned" by it, which makes it a very hard sell in some places where it could be sold as a C++ successor.

Because Rust made the right choice in time, it's now a contender in that space; if it had chosen to go down the runtime-required/custom-threading-model route, it would have much less practical appeal. If you can swallow runtime/threading abstraction overhead, why not just bolt on a GC and use Go?


Many systems have been developed in GC-enabled systems languages.

C++11 introduced a GC API in the standard library, and one of the biggest C++ game engines, Unreal, does use GC for its engine objects.

C++ on Windows makes heavy use of reference counting (which is a GC algorithm from CS point of view), via COM/UWP.

The biggest problem to overcome is religious, not technical.


>C++ on Windows makes heavy use of reference counting (which is a GC algorithm from CS point of view), via COM/UWP.

Not sure if ref counting is a good example here, as there is no runtime monitoring the object graph hierarchy, and of course Rust itself uses ref counting in many situations.


Chapter 5 of "The Garbage Collection Handbook", one of the GC gospel books.

Reference Counting is a garbage collection implementation algorithm from CS point of view.

RC has plenty of runtime costs as well, cache invalidation, lock contention on reference counters, stop the world in complex data structures, possible stack overflows if destructors are incorrectly written, memory fragmentation.


> It's literally taken five years to get back to an alpha that's not as good

The new I/O system is better in several ways. First, as you acknowledged, not everyone writes servers that need high scalability. M:N has no benefit for those users, and it severely complicates FFI. Second, async is faster than M:N because it compiles to a state machine: you don't have a bunch of big stacks around.


Yes, it's better in several ways, but it's also worse in several ways. It will take another five years to build a robust ecosystem for servers, and you'll still have to be careful not to import the wrong library or std module and accidentally block your scheduler. Plus the extra noise of .await? everywhere.

I'm not saying it was the wrong decision five years ago, but it definitely was a choice and there could have been a different one. I was responding to someone who said async wasn't an option five years ago.


M:N was slower than 1:1 in Rust. That's why it was removed. The problems you cite are problems of async/await, but they can be addressed by just using 1:1 threads.


I don't think M:N forces a stack. The stack/no-stack distinction is called stackful vs. stackless coroutines.

M:N is the parallelization level. I'm actually not sure if Rust is M:1 or M:N or both based on configuration.

M is the number of concurrent processes in the language, basically the number of user threads. These user threads can be implemented to be stackful or stackless, up to the language. N is the number of OS threads.

At least that's always been my understanding.


I've done Rust since 2013. It actually had two half-baked runtimes as a compile-time mode.

It also was constantly crashing and had weird semantic issues. I very much prefer the current state, even if I'm a bit sad that async/await has taken us so long.


I used "5 years ago" as a code for "the first time I played with Rust". Obviously not 5 years ago, then. It's pretty amazing how far it's gone so quickly.


> It's trivial to turn async into sync in Rust. You can use ".poll", "executor::block_on", et cetera.

Is it a 0-cost abstraction? I mean, will `sync_read` compile to the same code as `async_read.poll`? Because turning sync into async is kind of trivial as well: just spawn a new thread for that sync block.


0-cost abstraction was summarised by Stroustrup as:

> What you don’t use, you don’t pay for. And further: What you do use, you couldn’t hand code any better.

In that mind-set, it is completely okay that `sync_read` and `async_read.await` can totally compile to something different, as they abstract different things.

Boats has some more thoughts on this here: https://boats.gitlab.io/blog/post/zero-cost-abstractions/


That's a tricky question. My understanding is that while it's (in theory, modulo compiler bugs and features) a 0-cost abstraction over different underlying system APIs, those different underlying system APIs aren't necessarily the same cost. For example, if I'm trying to read from a socket in the synchronous world, I just issue the `read` system call. But in the async world I'm going to do quite a bit more:

- Create an epoll descriptor.

- Add my socket to that descriptor.

- Poll the descriptor for a readiness notification.

- Read from the socket.

Those first three system calls weren't required in the synchronous version, and unless the read is large enough to overshadow them, they represent some additional cost. But that cost is required by the OS itself, not by Rust's abstractions.

Someone with more experience writing Mio code might want to jump in and correct me here though.


You're pretty much correct, there is a tradeoff here. There's reasons why many high-performance systems like databases are mixed systems.


Spawning a new thread for an operation is not async in the sense people typically mean. For an async IO library, you would expect it to be using async IO primitives like epoll, not just wrapping blocking operations in a thread.


That’s what I like about Go. You write sync code, but because Go routines aren’t OS threads they operate with the efficiency of async code.


> You write sync code, but because Go routines aren’t OS threads they operate with the efficiency of async code.

No, they don't. Goroutines have stacks, while Rust async code does not. Go has to start stacks small and copy and grow them dynamically because it doesn't statically know how deep your call stack is going to get, while async/await compiles to a state machine, which allows for up-front allocation. Furthermore, Go's M:N scheduling imposes significant costs in other places, such as FFI.

Besides, for the vast majority of apps, OS threads are not significantly different from goroutines in terms of efficiency. Rust doesn't have a GIL and per-thread startup time and memory usage are very low. It only starts to matter once you have a lot of threads—as in, tens of thousands of clients per second—and in that case it's mostly stack size that is the limiting factor.


> OS threads are not significantly different from goroutines in terms of efficiency

This is not true for a use case with a lot of connections; additionally, context switches cost a lot more now with all the side-channel attack mitigations on.


People are happily running Rust servers in production using thousands of concurrent threads.


At the same time, many others do need more power; that’s why async/await is asked for so often.


I don't doubt that, but apparently we have different definitions of "a lot". Additionally, latency and hardware also matter: I can say that C is doing n things a second while PHP is doing the same, but C is running on an EC2 micro instance and PHP is running on a 2x Intel Xeon Platinum 9282 dedicated machine. The C10K problem was not solved by the 1:1 model, and that is an old problem. C100K+ is what I see in some of the production systems I work on.


OK, but, I mean, we've done this experiment, and we found that M:N in Rust was slower than 1:1.


Thousands isn't much.


You can scale up to tens of thousands of threads. But if that isn't enough for your application, then you can use async!

M:N threading was slower than 1:1 in Rust.


AFAIK, that's just for file I/O, which doesn't really work well with epoll.


Correct. It's also common practice.


If you're writing sync code, the last thing you're thinking about is the costs associated with these APIs.

Sync is a relic of our recent past. We use it when we need to shave off development costs.


> Turning sync into async is harder in any language.

Well, in most languages you can wrap sync into async, so it's not "hard". It's just harder to have NON-blocking code. E.g., in C# there is a difference between:

`await Task.Run(() => Thread.Sleep(5000));`

and

`await Task.Delay(5000);`

both will wait for 5 seconds, but one ties up a thread-pool thread for the whole duration while the other won't.


> well in most languages you can wrap sync into async. so it's not "hard"

It is not easy to do in a correct and performant way. "Async" doesn't mean "code that runs in another thread". You can have a single-threaded runtime running async code (that's usually the case for JavaScript).

The "async-ness" is in those cases provided by the use of non-blocking primitives for IO, network etc. If a function is making a blocking call to the file system even if you make it async it will not help since the main thread will still be blocked on that system call.

The performance will also be quite different: waiting for data on 10000 sockets in a non-blocking way is quite different from having 10000 threads doing the same.


Including in rust


> Turning sync into async is harder in any language.

Elixir's Task module (in the stdlib):

    future = Task.async(fn ->
      do_something_here
    end)
    
    ...do_other_things...

    result = Task.await(future, timeout)
Mixing it with the Enum library makes concurrency dead-simple (I got a junior dev dispatching concurrent tasks in scripts with confidence), at the expense of an ugly nested double lambda.

    some_list_of_values
    |> Enum.map(fn value -> 
      Task.async(fn -> do_something_with(value) end)
    end)
    |> Enum.map(&Task.await(&1, timeout))


Does Elixir overload IO operations to be async in async contexts? Because that is largely why you cannot just wrap sync code in an async block and call it a day: once it hits a system call, the thread is paused but the scheduler cannot tell that it should be dequeued.

This is largely why Python async took so long to mature, because so much inbuilt functionality was making IO operations transparently using core sync impls that locked up any async executor.


I'm still relatively new to the erlang vm so some of the details here might be wrong, if someone wants to correct me, please it's welcomed.

Console IO operations are actually message calls to a "global group leader" which performs the operation, so they are async (and atomic). This can sometimes be confusing if an operation (such as logging) has a bunch of middlemen with an IO operation as a side effect. It's worth the atomicity, though, so none of your IO calls are interrupted by another IO call. Also, if you run a command on a remote node which dispatches IO as part of its own process, the IO will be forwarded back to its group leader (which is on your local node), which is useful for introspecting into another VM.

Disk IO is also different; each open file descriptor effectively gets its own "thread" that you send IO messages to. There are ways to bind a file descriptor directly to your current "thread", but you "have to be more careful when you do that" - you do that if performance is more important (and I have done this, it's not terrible if you are careful).

Network IO is also different; the erlang VM kind of has its own network stack, if you will, but you can set up a socket to be its own "thread" or you can bind a socket into a thread so that network packets get turned into erlang messages.

Handling blocking is all done for you by the VM, which is preemptive and tries to give threads fair share of the VM time.

When people say that programming the erlang VM is like doing everything in its own os, they aren't kidding. Except unlike linux, where your communications are basically limited, you get to interact with your processes via structured data types with coherent language (and also IPC calls are way cheaper than OS processes).

> Does Elixir overload IO operations to be async in async contexts?

Maybe the right way to answer this is: When in Elixir, presume everything is async.


This incurs runtime overhead and boilerplate, so while it’s not as hard as other languages, it’s still harder.


like what, a few microseconds? What are you doing where you're awaiting for things in parallel where that matters? HPC? We're dispatching things that take on the order of minutes. Typically a local network request has 10-20 milliseconds of latency on our office LAN, so whatever. Clean and comprehensible code with very little boilerplate is more important when I'm reviewing my junior's code.


Well, that's a hard sell for Rust, because it specifically advertises itself as a C++ replacement, which means no overhead or runtime.


I think if you're striving for that, then a bit of complexity is warranted. Not everything has to be simple, and async is hard to do correctly without the right abstractions. Honestly, though, I was hoping Rust would go with the Actix way of doing things, but that's fine. You don't have to use Rust's async.


Elixir Tasks act closer to a very lightweight threadpool dispatch, rather than the coroutine style of async/await in other languages. An Elixir task doesn't, iirc, share memory with other tasks and won't block if you make it spin.

This makes it a hell of a lot easier to reason about.


There are ways to get around this, the way that I have done it in Nim is via a `multisync` macro:

    proc readLine(s: Socket | AsyncSocket): Future[string] {.multisync.} =
      while true:
        let c = await s.recv(1)
        case c
        of '\n':
          return
        else:
          result.add(c)
This is equivalent to defining two `readLine` procedures, one performing synchronous IO and accepting a `Socket` and another performing asynchronous IO and accepting an `AsyncSocket`. It works very well in practice.


This is why I'm curious about algebraic effects which was recently discussed on /r/rust [0]

The main challenges I see are around usability within the language design on how best to propagate and compose them.

[0] https://www.reddit.com/r/rust/comments/cjcwmu/is_there_inter...


I found out about algebraic effects a year or two ago when I ran across the efforts to bring them to OCaml. Async/await was a preliminary thing, so I thought algebraic effects would be a more complete solution and more in line with the Rust ethos. After asking around at some NYC meetups and on Reddit, my impression is that there isn't a lot of appetite to break additional new ground in terms of bringing fringe language features (I'm unaware of a non-academic language that features them in a production release; OCaml is closest AFAIK) into a language that already has a fairly high barrier to entry.


The point of asynchronous programming is to know exactly where concurrency happens in your code. This both eliminates concurrency bugs and gives you predictability for high performance.


Cooperative multitasking, like it's 1995 again?

No thanks.


Cooperative multitasking is great within a single application, it's when you don't have preemptive multitasking between applications that you have the problem seen in the early 90s on early MacOS and Win16.


Asynchronous programming is not cooperative multitasking.


They are very similar. In cooperative multitasking, programs yield the thread to the OS, while async programs yield to the event loop. In both cases, yielding is voluntary. Are there other differences? It's probably quite easy to turn a program written one way into the other.

Edit: I just remembered that in cooperative multitasking, it's probably possible for the OS to safely save the program stack pointer, meaning the program doesn't have to unwind its stack when yielding, unlike async programs. Never mind, that makes the two models quite different. However, in practice, programs written for cooperative multitasking really should be structured just like async programs in order to be responsive (so users can, for example, interact with the GUI while downloading files in the background.)


Even conceptually the models are very different, in one control is just given up and regained unpredictably, while in the other one it is programmed, hence asynchronous programming, not multitasking.


> in one control is just given up and regained unpredictably

Which one? It’s “cooperative” ie not unpredictable. The points where one can block are predictable and documented explicitly, otherwise how would the programmer know they won’t block forever. The same should hopefully be the case for async/awaitable apis.

In fact where async/await will actually give up control are harder to tease out.

The differences are really not as big as they would seem.


In cooperative multitasking you can program when to give up control, not when it is regained. The regaining part is unpredictable. Which introduces a lot of non-determinism to deal with and overhead.


This is no different than async/await. At some point you await a scheduled primitive - it could be a timer, IO readiness, an IO completion... - and yield to a scheduler. You don't specify explicitly when you return. These are not tightly coupled coroutines. This is precisely what is going on in cooperative multitasking.

I don't see how this increases overhead to deal with either.

Basically, coop multitasking and async/await operate on the exact same execution framework; the latter just gives convenient syntactic support.

Perhaps you should see how TypeScript turns async/await into JS.


Await is just syntactic sugar. You do not really await anything. What actually happens is that an event handler gets called on an event and sets up more event handlers for more events, and so on. This is the essence of asynchronous programming. There are no tasks, no yielding, practically no overhead and everything is deterministic (in relation to external events obviously) [1]. The only cooperative multitasking implementations that have the same amount of determinism are those implemented strictly on top of event loops and that lack a yielding function, so they cannot really be called cooperative multitasking implementations, as they can't "cooperate". All actual implementations have yielding, do not get control deterministically (dealing with that non-determinism requires stuff like semaphores) and have relatively significant overhead.

[1] If implemented with care, not doing syscalls in the middle of async primitives and using fast nearly-O(1) algorithms for timers, etc. it can be incredibly fast. And of course Rust also gives enough room to mess up all that nice determinism.


> You do not really await anything. What actually happens is an event handler gets called on an event,

So the event handler gets called immediately? No, that's not right. What would be the point of that? The event handler or continuation obviously needs to be scheduled on something that is awaitable. Meanwhile, other concurrent tasks may be able to run.

> This is the essence of asynchronous programming. There are no tasks, no yielding, practically no overhead and everything is deterministic

This is just totally wrong. Especially re tasks: https://docs.python.org/3/library/asyncio-task.html#creating...

There is nothing inherent about async and await that prevents “yielding”... the issue of yielding and semaphores is a concurrency issue and since async and await are used in concurrent programming environments, the same issues apply.

While it is true async and await don’t require any kind of cooperative concurrent framework to work, that is kind of their whole point for existing. A single task async/await system isn’t terribly interesting.


> So the event handler gets called immediately? No that’s not right. What would be the point of that? The event handler or continuation obviously needs to be scheduled on something that is awaitable. Meanwhile, other concurrent tasks may be able to run.

It's kind of like this: async/await is syntactic sugar for higher-order abstractions around event loops. At the level of event loops and event handlers there is no awaiting anymore. And the whole point of event loops is to not run event handlers concurrently; that's why they are even called loops: they invoke handlers one by one in a loop deterministically without concurrent tasks, and once there is nothing more to run they just block and wait for new events. Obviously you can run multiple event loops in parallel, but you shouldn't share memory between them, as it defeats the purpose, is always slower and is never really necessary; you can just use asynchronous message passing to communicate between event loops when you have to.

> A single task async/await system isn’t terribly interesting.

And yet this is the whole point of async/await, promises, futures and event loops. All of them exist to avoid mistakes and performance problems of shared memory concurrency. I mean, really, if you have semaphores or mutexes in event handlers, futures, promises or async functions - you are in a broken concurrency model zone.


It is regained in exactly the same cases it would be in the async model: when a blocking operation completes and the scheduler resumes the now-ready thread. The scheduler is called an executor in the async world, while a thread is a coroutine, but the concepts are very similar.


It is cooperative, so no, control is relinquished predictably. The difference is that the syntactic limitations of the current async model prevent building abstractions.


How is it not?


Tell that to my infinite loop.


Would you mind elaborating on your opinion here?

As far as I understand, cooperative is far more efficient than preemptive, but unsuitable for poorly written or untrusted code.

I wish to learn and would really appreciate your assistance if you are willing to help.


The key difference is cooperative multitasking lets the program yield the thread anywhere, not just to the event loop like async programming. Arbitrary yielding was a feature that programmers widely abused in the early Windows days. The user would start something in an app that takes some time to complete; the app would freeze for a while, but all other apps remained usable. It was obvious that the programmers, rather than solving the real problem, had sprinkled some yield instructions throughout the program, which allowed the computer to keep working even though the app was unresponsive. It's a good thing that async programming frameworks don't usually allow yielding from arbitrary places.


>It's a good thing that async programming frameworks don't usually allow yielding from arbitrary places.

Well..

    await new Promise((res, rej) => { setImmediate(res); })
(In environments without `setImmediate` this is easily shimmed - https://github.com/YuzuJS/setImmediate)


True. That's the new kind of yield that requires language support and it's only available in async functions. I was referring to what happens when framework or language designers try to allow something like the await keyword in non-async functions; it turns into an epic mess. I know because I tried (as a thought experiment.) :-)


Why not?


The caller of the function knows nothing about what happens within the body of the function. (Is it just doing computation, or is it doing I/O?) The async keyword is how the author of the function makes it explicit that the caller should choose when to await the result.

Isn't the alternative WCiYF is proposing to allow the caller to treat any function asynchronously, while having no way to discern whether doing so might be counterproductive?


Rust async functions are also different here. They don't run the code; they create a Future structure in its initial state. Contrary to e.g. JavaScript, it doesn't start to run until you end up putting it on an executor. So the actual function call does something very different from what happens in other languages.

It's sometimes called "cold futures".



It's similar in some ways, yes. As always, details and labels differ.


The right way to handle this stuff is polymorphism. But with Rust lacking higher-kinded types I guess that's not possible. A decent test of whether their "associated type constructors" actually solve the same problems, as sometimes claimed, would be whether you can write this kind of async-polymorphic code with them.


> I thought Rust had other, better ways to create non-blocking code so I don't understand why to use async instead.

'async' exists because Python has that GIL bullshit and so Python programmers had to invent that fifth wheel of 'async programming'.

Programmers in other languages then got jealous because they, too, wanted a complex, unnecessary framework that pollutes the whole runtime and serves to differentiate regular programmers from 'rockstar' programmers.

And so async got fashionable and barely-literate coders now think async is magic performance dust that will automatically make your program run 1000% faster.

TL;DR - it's just fashion, give it five years and we'll be reading posts about how async sucks and that it's stupid legacy tech invented by bonehead dinosaurs.


Pretty sure the async/await support in C# predates Python.

C# introduced it in 5.0, which came out in August 2012. The Python proposal (PEP 3156) for an async library was posted in 2012, the proposal (PEP 492) for async/await syntax in 2015, and implemented in Python 3.4 and 3.5 respectively, I believe. So C# predates Python by about 3 years.

From what I can gather, Python was influenced by C#. But C# doesn't have a global lock, and that's not why it has async/await.

Edit: Added PEP reference.


There is further discussion about this library on the rust subreddit: https://www.reddit.com/r/rust/comments/cr85pp/announcing_asy...


It looks great.

But it's odd that they do not cite Tokio. I know this isn't an academic paper, but come on, have some professional courtesy and discuss the contributions made in prior art.


Apologies if I'm misunderstanding things here, I'm just now getting back into Rust after a couple of years of not using it. Did Tokio really inspire this library that much?


Absolutely. The whole std::future interface was borne out of years of careful attempts to actually make these abstractions work in real life. async-std didn’t come from a vacuum. It’s an incremental improvement on Tokio that benefits from being able to greenfield on top of the newly changed and standardized Future trait.

Carl Lerche and the rest of the Tokio contributors deserve a citation.


In case anyone else was curious how you create nonblocking file I/O, it appears to use threads.

I am curious if the number of threads is unbounded, or if they have a bounded set but accept deadlocks, or if there is a third option other than those two that I am unaware of.


Note that the implementation of the runtime itself is subject to change. It's currently an unbounded threadpool.

https://github.com/async-rs/async-std/pulls?utf8=%E2%9C%93&q...


io_uring grew support for buffered IO in recent kernels, so we should have widespread support for this in userspace circa 2025


Except that io_uring is threads running in kernel.

There is no true async I/O on most (if not all) current platforms - it's all threads, either in user space or in kernel space. Sometimes even deliberately: for example, polling the disk gives better latency than waiting for an IRQ.


AFAIK Windows handles truly asynchronous buffered IO in some circumstances, but I feel once you're past the point of managing the abstraction or caring about its internal details, it doesn't really matter if there is a tiny chunk of dedicated stack in the kernel, that's a problem for the OS


IOCP uses a pool of quasi-kernel threads (i.e. schedulable entity with a contiguous stack for storing state) with polling, very much like how io_uring and other incarnations of AIO in the Linux kernel work; and for that matter it's not unlike how purely user space AIO implementations work. The benefit of IOCP and io_uring is there's one less buffer copying operation. The biggest benefit of IOCP, really, is that it's a blackbox that you can depend on, and one that everybody is expected to depend upon. So it can be whatever you want it to be ;)


> IOCP uses a pool of quasi-kernel threads

Is there any further documentation for it? I would have expected there doesn't need to be a real stack. Only state-machines for all the IO entities (like sockets) which get advanced whenever an outside event (e.g. interrupt) happens and which then signal the IO completion towards userspace. Didn't expect that it's necessary to keep stacks around.


Yes, IOCP looks like true async I/O most of the time, but in reality it will block if the file is cached: if code tries to read something like 100+ MB at a time, ReadFile calls can take 200ms+. So most "async I/O" frameworks have to wrap IOCP in a user space thread ...


Or disable caching for specified file when calling CreateFile API

https://docs.microsoft.com/en-us/windows/win32/api/fileapi/n...


Also VMS..


O_DIRECT + aio on Linux seems okay for preallocated files, no?


If by aio you mean Posix aio - on Linux it's implemented with user space threads and blocking I/O. Posix aio on BSD systems is implemented as kernel space thread (aio_write/etc are syscalls on BSD, and glibc functions on Linux).

If you mean io_submit, then yes, but in vast majority of cases, actual `io_submit` syscall will block, because of metadata updates, unaligned reads, etc ...


I initially wrote libaio but then thought it would just confuse people. :)

Yes, I mean io_submit, which is what MySQL uses.


Or just detect it and swap out the implementation. Less than a year until an Ubuntu LTS that has it.


I should have written 2035 to make the sarcasm a little clearer :)


Are those really the only options? I'm trying to wrap my head around how using a fixed size thread pool for I/O automatically implies deadlocks but I just can't. Unless the threads block on completion until their results are consumed instead of just notifying and then taking the next task..

I can definitely imagine blocking happening while waiting for a worker to be available, though. Did you mean simply blocking instead of deadlock?


N threads, with N readers waiting for a message that will only come if the N+1 reader (still in the queue) gets a message first.


Thank you for humoring me. I had to sleep on it, but I can see it now. Seems like it would require a really bad design or more likely bad actors (remotes leaving dead sockets open), but it would definitely be possible.

The same scenarios would lead to resource exhaustion if the thread pool wasn't bounded.


But surely one must use an output queue, not synchronously wait for the consumer to consume a result?


The N + 1 readers are all reading different sockets, blocked.


Non-blocking I/O via threads? That’s what we used to call blocking I/O :D


The documentation is great, and the API documentation includes examples for many functions. This is really appreciable. Thank you for that!


Thank you! :)


How does this relate to Tokio [0]? Why should I choose this new library instead?

[0] https://github.com/tokio-rs/tokio


If all you needed from tokio was tokio::net, then async-std could work as a replacement for raw TCP stuff. If you needed the higher-level stuff from tokio like codecs then you'd not have those.

Also, anything from the tokio ecosystem like hyper would not work with async-std.

Edit: I originally had a first paragraph which was wrong. I mistakenly thought std::net::TcpListener is supposed to impl Read / Write.


> async-std has the equivalent of std::net::TcpListener, however it does not appear to actually impl AsyncRead / AsyncWrite. So as of now you can't do anything with it. TcpStream does impl them, at least.

It does implement AsyncRead and Write, because anything with `Read` and `Write` implements it: https://docs.rs/async-std/0.99.3/async_std/io/trait.Read.htm... (that's sadly a little backwards by rustdoc)

The problem is that tokio has their _own_ versions of the AsyncRead and Write traits.

Hyper can best be used with `async_std` through `surf`: https://github.com/rustasync/surf


>It does implement AsyncRead and Write, because anything with `Read` and `Write` implements it: https://docs.rs/async-std/0.99.3/async_std/io/trait.Read.htm... (that's sadly a little backwards by rustdoc)

>impl<T: AsyncRead + Unpin + ?Sized> Read for T {

That's saying that anything that impls futures::AsyncRead impls async_std::io::Read. async_std::net::TcpListener does not impl AsyncRead. (Compare with TcpStream and File which do.)

>Hyper can best be used with `async_std` through `surf`: https://github.com/rustasync/surf

Sure. You also don't need surf since you can directly use futures's compat executor wrapper around tokio's. The point is that you can't use stuff like hyper without the tokio executor being involved.


I'm being dumb too, I misread my own library's API :(. In any case, it's 2:30am here, I'll just head to bed :D.


Actually, I'm just being dumb. TcpListener isn't supposed to impl AsyncRead / AsyncWrite in the first place. std::net's one doesn't impl Read / Write either.


Here's an extension trait to convert rust-std streams to tokio streams, so that they work with Hyper https://github.com/jedisct1/rust-async-std-tokio-compat


Anything Rust gets the post to number 1 spot. What makes Rust special that other programming languages don't enjoy ?


Rust is very ambitious and unusually successful at reaching its ambitions. It's efficient like C/C++, but safer. It's modern like Go, but more expressive and open to metaprogramming. It's often as readable as a scripting language, but doesn't depend on garbage collection. It's a young rising star originating from a great company.


> It's often as readable as a scripting language

IMO only if you're doing simple things the standard library provides utilities for. I haven't found it to be very readable once code grows in complexity, but I'm also not very experienced.


I agree that libraries have a major impact on the perceived readability of a programming language. As an example, it used to be quite messy to issue HTTP requests from Python, but then the Requests library appeared, and suddenly it became much easier to write readable client libraries. Rust code will become more readable as its libraries mature.


> often as readable as a scripting language

I beg to differ. If anything, Rust is often cited as being hard to read.


Same here. Rust code tends to be dense. It has its moments, but I wouldn't compare it to a scripting language.


I'm sitting in a Rust talk at a conference right now, and the speaker has slides comparing the syntax to Ruby/TypeScript/Python. "You already basically know Rust"

A lot of the fancier stuff is very different, but there's fairly close parallels to most of the basic syntax.


Basically everything Algol-descended has close parallels in the syntax. That falls apart as soon as you hit a turbofish. I'm not saying the language or even the syntax is bad, but I think it does earn its reputation for being a little hard to read.


Even the turbo fish is consistent with other languages’ syntax: :: for scopes, <> for type parameters. Regardless, you almost never need to write the turbofish, so I think it also falls under niche syntax.


Code that fits in a slide is hardly evidence for readability.

Not to mention one could hand pick examples which is what I'd expect from slides of a talk.


It’s like Rust is the child of Python and Ruby, but the real father is C++.


> often as readable as a scripting language

Rust code is littered with things like |_|, (|&(&x, _), &mut, &'a and Result<(), Box<dyn std::error::Error + Send + Sync>>.

It's anything but easily readable.


Unfortunately Ada isn't as trendy.

So we need Rust for younger generations.


> Anything Rust gets the post to number 1 spot

This is bunk. Simply run this search[1] and behold the stream of Rust related submissions that get no play at all; zero comments and no more than one or two up votes. These instances of highly ranked rust stories are actually the exception; no more than one or two a week typically. The rest of the Rust stuff is seen by almost no one.

[1] https://hn.algolia.com/?query=rust&sort=byDate&prefix&page=1...


Rust is the first language basically ever that has a serious chance at displacing C and C++. Combined with a type system that eliminates multiple error classes, and some functional aspects, it's a very interesting language.


1) It's built in large part by Mozilla, which enjoys special love as an OSS company. 2) Rust provides tangible benefits (memory/concurrency safety) over existing languages in an important niche (performance-sensitive programs). 3) The community is nice and talented so it's fun to see what they're up to.


Deterministic memory and object deallocation combined with memory safety, and a vibrant open ecosystem.


A possibility to displace C and Java.

Features like Haskell—destructuring bind, useful type system.

Performance like C, including no GC.


Strong static typing with types inference. Memory allocation manually handled, no garbage collector. C-like performances.

Makes it a good option when reliability and performance matter (think web browser, database or anything at the OS level).


It can be as fast as C++ whilst at the same time suffering from none of these bugs:

https://www.youtube.com/watch?v=lkgszkPnV8g


How does it balance tasks across CPU cores?

The thing I like in Go is that I don’t have to worry about that, it’s all automatic.


It does the scheduling for you. That's why all futures spawned through `async_std::task` must be `Send`. That's Rust parlance for "can be safely migrated between threads".

It's not Go, but we know what people like about Go. <3


I am wondering that also, scheduler is of most importance.

Go is such a joy to work with.


Has preemptive scheduling landed in Go ? Because last time I worked with it (Go 1.10) it was still cooperative and you had to worry about it otherwise you could get bitten badly.


In Rust you use async/await with a scheduler like mio that will automatically do that for you as well.


Great library, well done. In case anyone was wondering, this is not a `no_std` crate even though it can be used as a replacement for std library calls. I guess it (obviously) can't be, since it interfaces with the operating system so much.


Project member here.

It exports stdlib types (like io::Error) where appropriate so that libraries working with these can stay compatible, so `no_std` is not really an option.

The underlying library (async-task) is essentially core + liballoc, just no one made the effort to spell that out, yet.


Would there be any benefit to making a `no_std` option? I can't think of a situation you would want async std and have including std be a problem.


I'm pretty sure that once you don't want `std`, you also want to pick your own scheduler. In this case, you can use the base library (`async-task`) and get started.


The benefit would be you would be making it impossible to wreck your scheduler by calling sync I/O functions in an async task.


I wouldn't rely on that. Next step, someone binds to a blocking database driver and you are back at square one. This is definitely not rigorous.

I would love to see a lint for known-blocking constructs in async contexts, though: https://github.com/rust-lang/rust-clippy/issues/4377

Also, having explicit imports and types that name collide helps there for once.


How would you have a blocking database library that doesn't use the standard library?


Any code in Rust is free to bind to FFI and sockets can be gained through `libc`.


I didn't literally mean "How is that possible?"

I meant: is that a real thing? Is there a database binding out on crates.io that uses no_std ?


SQLite is C code and any Rust usage of it will not play nicely with M:N. This is just off the top of my head. I'm sure there are plenty of other examples.


Yes. E.g. some databases highly recommend using their C library, as they don't consider their protocol specified.

That gap might close, but it will stay with us for years.


unsafe


> It exports stdlib types (like io::Error)

Excellent, good to know!


I really wish we could work out a better way to make no_std easier. I know why io::Error requires std for example, but it makes things difficult.

It would be nice if you could provide your own sys crate so you could even use some of std on an embedded device. If you had say an RTC you could make time related calls work, maybe you'd wire networking to smoltcp etc. Currently you could do that - maybe - but you'd have to modify the Rust standard library.


The stdlib is already just a facade. There's plans to make that more explicit, maybe opening up the way to things like that.

But that plan is severely understaffed; we've got so much else to do.


I certainly understand, and alloc for example is great progress. I'll be very happy when we can provide all of std (or I suppose sys isn't it?) by external crates.


Very cool! Now we need a DPDK equivalent to really blow away people's expectations of what is fast.



