
This reminds me of the blog post "What Color is Your Function?"[0]: they had to create a separate library that mirrors the standard library but with async functions.

I thought Rust had other, better ways to create non-blocking code, so I don't understand why one would use async instead.

[0] https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...



> I thought Rust had other, better ways to create non-blocking code so I don't understand why to use async instead.

In fact, Rust does have a great solution for nonblocking code: just use threads! Threads work great, they are very fast on Linux, and solutions such as goroutines are just implementations of threads in userland anyway. (The "what color is your function?" post fails to acknowledge that goroutines are just threads, which is one of my major issues with it.) People tell me that Rust services scale up to thousands of requests per second on Linux by just using 1:1 threads.

Async is there for those who want better performance than what threads/goroutines/etc. can provide. If you don't want to deal with two "colors" of functions, you don't have to! Just use threads.
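To make the "just use threads" model concrete, here is a minimal std-only Rust sketch of a thread-per-connection echo server, with an in-process client only there to exercise it (all names are illustrative, not from any real service):

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

fn main() -> std::io::Result<()> {
    // Thread-per-connection: every client gets its own 1:1 OS thread.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;

    thread::spawn(move || {
        for stream in listener.incoming() {
            let mut stream = stream.unwrap();
            // Plain blocking I/O; the OS scheduler does the multiplexing.
            thread::spawn(move || {
                let mut buf = [0u8; 1024];
                while let Ok(n) = stream.read(&mut buf) {
                    if n == 0 {
                        break;
                    }
                    if stream.write_all(&buf[..n]).is_err() {
                        break;
                    }
                }
            });
        }
    });

    // Exercise the server: write a message, expect it echoed back.
    let mut client = TcpStream::connect(addr)?;
    client.write_all(b"ping")?;
    let mut reply = [0u8; 4];
    client.read_exact(&mut reply)?;
    assert_eq!(&reply, b"ping");
    Ok(())
}
```

There is no runtime, no executor, and only one "color" of function in this model.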


Threads are bad for high concurrency. Specifically when you need to call out to another service that has some latency.

Say you have 1000 threads. To handle a request each one needs to make 50ms of external or DB calls. In one second, each thread can handle 20 calls. So you can handle 20k requests/second with 1000 threads. But Rust is so fast it can serve 500k requests a second. So with regular threads, you need ~25,000 threads. The OS isn't going to like that.
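The arithmetic here is just Little's law (requests in flight = throughput x latency); a quick sketch checking the numbers above:

```rust
fn main() {
    // Little's law: concurrent requests in flight = throughput * latency.
    let latency_ms: u64 = 50; // downstream/DB calls per request
    let per_thread = 1000 / latency_ms; // req/s one blocking thread can serve
    assert_eq!(per_thread, 20);

    let threads: u64 = 1000;
    assert_eq!(threads * per_thread, 20_000); // 20k req/s from 1000 threads

    let target: u64 = 500_000; // req/s the CPU could otherwise push
    let threads_needed = target / per_thread;
    assert_eq!(threads_needed, 25_000); // blocking threads required
}
```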

With async you can run a single thread per core, with no concurrency limits. So you get your 500k requests without that overhead. With fibers you'd run ~25k fibers, which is a little bit of overhead but easy to do.

This is the core reason everyone is pushing async and fibers in fast languages. When you can push a ton of requests/second but each one has latency you can't control, regular threads will kneecap performance.

In "slow" languages like Python, Ruby, etc, async/fibers don't really matter because you can't handle enough requests to saturate a huge thread pool anyways.


Java services have managed for a long time to do just fine. Usually you just have dedicated threadpools for those db/ whatever calls.

But yes, eventually, for very heavy cases (more than what I would call "high") you will want async/await.


Which can still be done via java.util.concurrent (Callable, Futures, Promises, Flow) until Project Loom arrives.


25,000 threads are perfectly fine on Linux.


It is the memory associated with a POSIX thread that becomes the limit.


Why only 1000 threads? Why not 10k or 100k?

With an 8k stack for each, you can easily have 10k-100k threads on a low-end system
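Back-of-the-envelope for that claim, taking the 8 KiB figure above at face value (note the default pthread stack on Linux is typically 8 MiB of reserved virtual memory, so small explicit stack sizes are what make this work):

```rust
fn main() {
    // 8 KiB per stack, as claimed in the comment above.
    let stack_bytes: u64 = 8 * 1024;
    let threads: u64 = 100_000;
    let total_mib = stack_bytes * threads / (1024 * 1024);
    assert_eq!(total_mib, 781); // ~0.76 GiB of stacks for 100k threads
}
```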


Let's be real here, it's not just the memory requirements: context switching and the associated nuking of CPU caches are not free. You can go very far with it nowadays, but you can go much farther with async code, if you really need to.


Nobody is denying that async code is faster. But it's not as dramatic as presented in the grandparent post.

And IMHO the added code complexity is not worth the trouble.


We have some pretty vanilla file upload code that needs async. S3 latency is fairly high. If you're uploading a few tiny files per user per second, thread usage gets out of hand real fast.

With a simulated load of ~20 users we were running over 1000 threads.

Several posts in the chain say that 20k+ threads is "fine". Not unless you have a ton of cores. The memory and context switching overhead is gigantic. Eventually your server is doing little besides switching between threads.

We had to rewrite our S3 code to use async; now we can do many thousands of concurrent uploads no problem.

Other places where we've had to use async are a proxy that intercepts certain HTTP calls and a user stats uploader that calls a third-party analytics service.

Just sayin it's not that unusual to need async code because threading overhead is too high


In what language?


Java


> And IMHO the added code complexity is not worth the trouble.

The thing is, this is just that - your opinion, generalized as The Truth. But engineering is about making the right trade-offs. Often threading will be fine, you'll win simplicity, and all is good. But sometimes you really need the performance, or your field is crowded and it's a competitive advantage. Think large-scale infrastructure at AWS, central load balancers, or high-frequency trading.


It goes deeper than that. There is plenty of research showing that shared memory multithreading is not even a viable concurrency model. The premise that threads are fine and simple is just false.


I'm not sure what you mean. One of Rust's major research contributions is to show that shared memory multithreading is a perfectly viable concurrency model, as long as you enforce ownership discipline to statically eliminate data races.


> The thing is, this is just that - your opinion, generalized as The Truth.

Heh? Where?


In Python I use async instead of threads for reasons unrelated to performance. https://glyph.twistedmatrix.com/2014/02/unyielding.html


> The "what color is your function?" post fails to acknowledge that goroutines are just threads, which is one of my major issues with it.

"""Three more languages that don’t have this problem: Go, Lua, and Ruby.

Any guess what they have in common?

Threads. Or, more precisely: multiple independent callstacks that can be switched between. It isn’t strictly necessary for them to be operating system threads. Goroutines in Go, coroutines in Lua, and fibers in Ruby are perfectly adequate."""

What more do you need?


As an aside, Zig[0] recently merged a change to try and tackle the use case of [a]sync-agnostic functions: https://github.com/ziglang/zig/issues/1778. The "What Color is Your Function" blog post appears to have been one of the inspirations behind the change. Some of Andy's (the language creator) recent videos go over it in detail: https://www.youtube.com/channel/UCUICU6mgcyGy61pojwuWyHA

[0]: https://ziglang.org


That looks quite interesting!

The idea is that you can set the global "io_mode" to blocking, mixed, or evented, and I/O functions will switch their implementation accordingly. The type of the function will then, if I got that right, propagate up the call stack and transparently turn functions that touch it into either normal or async/awaitable functions.

Nice way to avoid a bifurcation of the ecosystem into red/green functions. It's a bit magical maybe; any other trade-offs?


Please excuse my cynicism, but how can a global variable for switching between blocking and async be considered interesting?

I mean, don’t you know at compile-time whether you want something to be async or not? If so, it should be handled by the type system, not by mutating a variable at runtime.


I believe this is a compile time setting. The point is to avoid the red/blue function color issue, which has implications for all functions upstream of I/O functions. It literally is about the type system. Did you even read the linked text?


One trade-off at the moment (if I understood this correctly) is that function pointers lose this transparency, and you have to be explicit about them being async (or not). Maybe the plan is to lift this restriction in the future? I am not privy to that.


None of the 5 points in that article about callbacks in 2015 node.js apply to async in Rust. The Rust people spent years agonizing over their version of async and applied a lot of lessons learned from implementations in other languages.

https://news.ycombinator.com/item?id=20676641

It's trivial to turn async into sync in Rust. You can use ".poll", "executor::block_on", et cetera.

Turning sync into async is harder in any language. Even Go with its easy threading. That's a good argument for making async the default in libraries in Rust, but since async wasn't stable, that would have been hard to do 5 years ago.
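For the curious, something like `block_on` can be sketched in a few lines of std-only Rust. This is a simplified illustration of the idea (park the thread until the future is ready), not the real implementation from the futures crate:

```rust
use std::future::Future;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// A waker that unparks the thread blocked inside block_on.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// Drive a future to completion from synchronous code.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(), // sleep until woken
        }
    }
}

async fn double(x: u32) -> u32 {
    x * 2
}

fn main() {
    // A plain sync caller drives an async fn to completion.
    assert_eq!(block_on(double(21)), 42);
}
```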


Literally none of your rebuttals actually rebuts the claims. The very fact that async-std exists is absolute proof that the issue remains: if there wasn't a "color" problem, there wouldn't be any need for a "port" of the standard library.

Rust had legitimate reasons for taking the approach that they did. One can agree that they made the correct decision without excusing and obscuring the consequential costs.


async-std is taking the "always red" approach that the article mentions, which wasn't possible until now due to async being hot off the presses. The rest of the arguments in the article are based on the point that "red functions are more clumsy to call", which doesn't hold for Rust, but holds for JavaScript.


The article explicitly admits that async/await is ergonomically much nicer than explicit futures/promises. But the color problem still remains, one consequence of which is duplication of code and interfaces.

Arguing that the problem doesn't exist if you only stick to functions of a single color isn't a rebuttal, it's an admission! But the fact of the matter is async functions have real limitations and costs, which is why they're not the default in Rust, which in turn is why any Rust program will always have some mix of differently colored functions. But, yeah, the fewer of one color and the more of the other color, the better. That's the point.


Once again, the article is about JavaScript, and everything it says still holds today; introducing async/await didn't change anything. Sync functions can only call other sync functions and use the result immediately. To use an async function, you have to convert the caller to an async function too, which can be anything from annoying to impossible.

So yes, Rust still has colors, but it doesn’t matter because a red function can call a blue one without a problem and vice versa. You’re right in saying that async functions have a cost and shouldn’t be used indiscriminately - so just use them when it makes sense. As opposed to JavaScript, Rust doesn’t make you commit to one or the other early and either face major refactors in the future or pay the price of async when it’s not required.

P.S. I think there are some caveats for library authors and also to blocking the thread on a single future, but maybe more qualified people can comment on those.


I think the point is that "colored" functions only existed because Rust did not previously have async support. Now that it has async support, new code can be one color: async, while maintaining ergonomics.

Maybe new code will be exclusively async and existing code will switch over.


Not all new code should be async. I write graphics code. There is no benefit to me, or any of my users, if all of my code is async. No system has 10,000 simultaneous GPUs to drive independently.


I agree with your general point, but I do want to point out (as I'm sure you're aware) that there's plenty of asynchronous logic in graphics code.

Some (but not all) of it might even benefit from async... although graphics code has its own solutions to many of these problems, and it certainly wouldn't be the bread and butter of your core render loop.

1) For performance reasons, your GPU consumes command buffers after a decent delay from when your CPU requests it. This means async logic crops up for screenshot/recording readbacks, visibility queries, etc. assuming you don't want to simply stall everything and tank your framerate.

2) New lower-level graphics APIs expose the asynchronous nature of command submission more than ever before, limiting safe CPU access to memory based on what the GPU is still accessing. This sometimes spills into higher-level APIs - e.g. bgfx buffer uploads can either take a reference to memory (fast) - which you must keep valid and unmodified for a frame or two (asynchronous, and currently difficult to expose a sound+safe Rust API for) - or make an extra deep copy (perf hit) to pretend it's behaving in a more synchronous fashion.

3) Resource loading is heavily asynchronous. You don't want to stall a game on blocking disk I/O for a missing minimap icon if you can just fade it in a few seconds later. I might not have 10,000 GPUs to drive, but I've certainly had 10,000 assets to load, semi-independently, often with minimal warning.


We can never eliminate sync stuff, because async requires a runtime. Async is great when you need it, but you don't always need it, and you shouldn't have to pay the cost if you don't plan on using it.


> but holds for JavaScript.

Held in 2015, but it doesn't any longer since JS got async/await.

This blog post isn't really interesting anyways, and its popularity mainly comes from the zealotry of gophers.


Nope, it still holds. It’s in fact impossible to call an async function from a sync function and return the result. To use await you have to make the function async which means the caller needs to be async-aware and so on, all the way to the top of the stack.

There are hacks like “deasync”, but I personally wouldn’t use it.

https://github.com/abbr/deasync

Rust can block on an individual future so, say, a sync callback can still take advantage of async functions.


But you don't need `await` to call an async function; you can use a regular function call in a sync function, and the function returns (synchronously) a Promise.

What cannot be done is to perform a blocking call on a Promise from a sync function. And that is by design because JavaScript has a single threaded runtime.


Given the history of JS, not being able to call an async function from a sync function is a non-issue. JS went from callbacks to promises to async/await (sugar on top of promises).


> that would have been hard to do 5 years ago.

Five years ago Rust still had green threads. Literally every standard library I/O function was async, and the awaits were always written for you with no effort.

It's literally taken five years to get back to an alpha that's not as good, and we'll still have to wait for a new ecosystem to be built on top of it. I know not everyone writes socket servers, and so forcing the old model on everyone probably doesn't make sense long-term, but I still have to shake my head at comments like this.

https://github.com/rust-lang/rfcs/pull/230


Green threads have no place in a low level systems language like Rust whose design goals are zero cost abstraction and trivial C interop.

D made a similar mistake by requiring a GC/runtime from the start, and now, even though they added ways to avoid it, the ecosystem and the language design are "poisoned" by it, which makes it a very hard sell in some places where it could be pitched as a C++ successor.

Because Rust made the right choice in time, it's now a contender in that space; if it had gone down the runtime-required/custom-threading-model route, it would have much less practical appeal. If you can swallow runtime/threading abstraction overhead, why not just bolt on a GC and use Go?


Many systems have been developed in GC-enabled systems languages.

C++11 introduced a GC API in the standard library, and one of the biggest C++ game engines, Unreal, does use GC for its engine objects.

C++ on Windows makes heavy use of reference counting (which is a GC algorithm from CS point of view), via COM/UWP.

The biggest problem to overcome is religious, not technical.


>C++ on Windows makes heavy use of reference counting (which is a GC algorithm from CS point of view), via COM/UWP.

Not sure if ref counting is a good example here, as there is no runtime monitoring the object graph, and of course Rust itself uses ref counting in many situations.
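For reference, Rust's Rc/Arc are plain non-tracing reference counts, adjusted eagerly on clone and drop, which you can observe directly:

```rust
use std::rc::Rc;

fn main() {
    // No tracing runtime: the count changes exactly when handles are
    // cloned or dropped.
    let a = Rc::new(vec![1, 2, 3]);
    let b = Rc::clone(&a);
    assert_eq!(Rc::strong_count(&a), 2);
    drop(b);
    assert_eq!(Rc::strong_count(&a), 1);
}
```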


Chapter 5 of "The Garbage Collection Handbook", one of the GC gospel books.

Reference counting is a garbage collection algorithm, from a CS point of view.

RC has plenty of runtime costs as well: cache invalidation, lock contention on reference counters, stop-the-world pauses in complex data structures, possible stack overflows if destructors are incorrectly written, memory fragmentation.


> Its literally taken five years to get back to an alpha thats not as good

The new I/O system is better in several ways. First, as you acknowledged, not everyone writes servers that need high scalability. M:N has no benefit for those users, and it severely complicates FFI. Second, async is faster than M:N because it compiles to a state machine: you don't have a bunch of big stacks around.


Yes, it's better in several ways, but it's also worse in several ways. It will take another five years to build a robust ecosystem for servers, and you'll still have to be careful not to import the wrong library or std module and accidentally block your scheduler. Plus the extra noise of `.await?` everywhere.

I'm not saying it was the wrong decision five years ago, but it definitely was a choice and there could have been a different one. I was responding to someone who said async wasn't an option five years ago.


M:N was slower than 1:1 in Rust. That's why it was removed. The problems you cite are problems of async/await, but they can be addressed by just using 1:1 threads.


I don't think M:N forces a stack. The stack/no-stack distinction is called stackless vs stackful coroutines.

M:N is the parallelization level. I'm actually not sure if Rust was M:1 or M:N or both based on configuration.

M is the number of concurrent processes in the language, basically the number of user threads. These user threads can be implemented to be stackful or stackless, up to the language. N is the number of OS threads.

At least that's always been my understanding.


I've been doing Rust since 2013. It actually did have two half-baked runtimes as a compile-time mode.

It also was constantly crashing and had weird semantic issues. I very much prefer the current state, even if I'm a bit sad that async/await has taken us so long.


I used "5 years ago" as a code for "the first time I played with Rust". Obviously not 5 years ago, then. It's pretty amazing how far it's gone so quickly.


> It's trivial to turn async into sync in Rust. You can use ".poll", "executor::block_on", et cetera.

Is it a 0-cost abstraction? I mean, will `sync_read` compile to the same code as `async_read.poll`? Because turning sync into async is kind of trivial as well: just spawn a new thread for that sync block.


0-cost abstraction was summarised by Stroustrup as:

> What you don’t use, you don’t pay for. And further: What you do use, you couldn’t hand code any better.

In that mind-set, it is completely okay that `sync_read` and `async_read.await` can totally compile to something different, as they abstract different things.

Boats has some more thoughts on this here: https://boats.gitlab.io/blog/post/zero-cost-abstractions/


That's a tricky question. My understanding is that while it's (in theory, modulo compiler bugs and features) a 0-cost abstraction over different underlying system APIs, those different underlying system APIs aren't necessarily the same cost. For example, if I'm trying to read from a socket in the synchronous world, I just issue the `read` system call. But in the async world I'm going to do quite a bit more:

- Create an epoll descriptor.

- Add my socket to that descriptor.

- Poll the descriptor for a readiness notification.

- Read the descriptor.

Those first three system calls weren't required in the synchronous version, and unless the read is large enough to overshadow them, they represent some additional cost. But that cost is required by the OS itself, not by Rust's abstractions.
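A std-only way to see the readiness model without touching epoll directly: a nonblocking socket returns WouldBlock instead of stalling the thread, and an event loop's whole job is to park on epoll/kqueue until readiness instead of the polite busy-wait sketched here:

```rust
use std::io::{ErrorKind, Read, Write};
use std::net::{TcpListener, TcpStream};

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;
    let mut client = TcpStream::connect(addr)?;
    let (mut server_side, _) = listener.accept()?;
    server_side.set_nonblocking(true)?;

    let mut buf = [0u8; 16];
    // Nothing has been written yet: the nonblocking read reports
    // WouldBlock instead of stalling the thread.
    match server_side.read(&mut buf) {
        Err(e) => assert_eq!(e.kind(), ErrorKind::WouldBlock),
        Ok(_) => panic!("expected WouldBlock"),
    }

    client.write_all(b"hello")?;
    // An event loop would park on epoll/kqueue here; we just spin politely.
    let mut got = Vec::new();
    while got.len() < 5 {
        match server_side.read(&mut buf) {
            Ok(0) => break, // peer closed
            Ok(n) => got.extend_from_slice(&buf[..n]),
            Err(e) if e.kind() == ErrorKind::WouldBlock => std::thread::yield_now(),
            Err(e) => return Err(e),
        }
    }
    assert_eq!(&got[..], b"hello");
    Ok(())
}
```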

Someone with more experience writing Mio code might want to jump in and correct me here though.


You're pretty much correct; there is a tradeoff here. There are reasons why many high-performance systems like databases are mixed systems.


Spawning a new thread for an operation is not async in the sense people typically mean. For an async IO library, you would expect it to be using async IO primitives like epoll, not just wrapping blocking operations in a thread.


That's what I like about Go. You write sync code, but because goroutines aren't OS threads, they operate with the efficiency of async code.


> You write sync code, but because goroutines aren't OS threads, they operate with the efficiency of async code.

No, they don't. Goroutines have stacks, while Rust async code does not. Go has to start stacks small and copy and grow them dynamically because it doesn't statically know how deep your call stack is going to get, while async/await compiles to a state machine, which allows for up-front allocation. Furthermore, Go's M:N scheduling imposes significant costs in other places, such as FFI.

Besides, for the vast majority of apps, OS threads are not significantly different from goroutines in terms of efficiency. Rust doesn't have a GIL and per-thread startup time and memory usage are very low. It only starts to matter once you have a lot of threads—as in, tens of thousands of clients per second—and in that case it's mostly stack size that is the limiting factor.
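To illustrate the "compiles to a state machine" point: a hand-written simplification of the shape the compiler generates for a trivial async fn, an enum whose variants hold only the variables live across await points rather than a call stack (this is not the actual generated code):

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// Roughly what `async fn doubler(x: u32) -> u32 { x * 2 }` lowers to:
// each variant stores only the live variables, not a full stack.
enum Doubler {
    Start { x: u32 },
    Done,
}

impl Future for Doubler {
    type Output = u32;
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        let this = self.get_mut();
        match *this {
            Doubler::Start { x } => {
                *this = Doubler::Done;
                // A real state machine would return Pending here if it
                // were waiting on I/O, resuming at this state later.
                Poll::Ready(x * 2)
            }
            Doubler::Done => panic!("polled after completion"),
        }
    }
}

// A waker that does nothing, just to be able to call poll() directly.
struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

fn main() {
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    let mut fut = Doubler::Start { x: 21 };
    assert_eq!(Pin::new(&mut fut).poll(&mut cx), Poll::Ready(42));
}
```

The whole suspended "task" is the size of that enum, which is why no per-task stack is needed.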


> OS threads are not significantly different from goroutines in terms of efficiency

This is not true for use cases with a lot of connections; additionally, context switches cost a lot more now with all the side-channel attack mitigations on.


People are happily running Rust servers in production using thousands of concurrent threads.


At the same time, many others do need more power; that’s why async/await is asked for so often.


I don't doubt that, but apparently we have different definitions of "a lot". Additionally, latency and hardware also matter: I could say that C is doing n things a second while PHP is doing the same, when C is running on an EC2 micro instance and PHP on a 2x Intel Xeon Platinum 9282 dedicated machine. The C10K problem was not solved by the 1:1 model, and that is an old problem. C100K+ is what I see in some of the production systems I work on.


OK, but, I mean, we've done this experiment, and we found that M:N in Rust was slower than 1:1.


Thousands isn't much.


You can scale up to tens of thousands of threads. But if that isn't enough for your application, then you can use async!

M:N threading was slower than 1:1 in Rust.


AFAIK, that's just for file I/O, which doesn't really work well with epoll.


Correct. It's also common practice.


If you're writing sync code, the last thing you're thinking about is the cost associated with these APIs.

Sync is a rudiment of our recent past. We use it when we need to shave off development costs.


> Turning sync into async is harder in any language.

Well, in most languages you can wrap sync into async, so it's not "hard"; it's just harder to have NON-blocking code. E.g. in C# there is a difference between:

`await Task.Run(() => Thread.Sleep(5000));`

and

`await Task.Delay(5000);`

both will wait for 5 seconds, but the first ties up a thread-pool thread for the whole wait while the second doesn't.


> well in most languages you can wrap sync into async. so it's not "hard"

It is not easy to do in a correct and performant way. "Async" doesn't mean "code that runs in another thread". You can have a single-threaded runtime running async code (that's usually the case for JavaScript).

The "async-ness" is in those cases provided by the use of non-blocking primitives for IO, network etc. If a function is making a blocking call to the file system even if you make it async it will not help since the main thread will still be blocked on that system call.

The performance will also be quite different: waiting for data on 10000 sockets in a non-blocking way is quite different from having 10000 threads doing the same.
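For contrast, the "wrap sync in a thread" approach mentioned upthread can be sketched like this in Rust; the result arrives asynchronously on a channel, but a whole OS thread still blocks for the duration (`spawn_blocking` is a hypothetical helper name here, not a std API):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Offload a blocking closure to a fresh thread; the caller gets a
// channel it can collect the result from later. Note: 10,000 of these
// means 10,000 parked OS threads, unlike true nonblocking I/O.
fn spawn_blocking<T: Send + 'static>(
    f: impl FnOnce() -> T + Send + 'static,
) -> mpsc::Receiver<T> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(f());
    });
    rx
}

fn main() {
    let rx = spawn_blocking(|| {
        thread::sleep(Duration::from_millis(50)); // stand-in for blocking I/O
        42
    });
    // The caller is free to do other work here...
    assert_eq!(rx.recv().unwrap(), 42);
}
```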


Including in rust


> Turning sync into async is harder in any language.

Elixir's Task module (in the stdlib):

    future = Task.async(fn ->
      do_something_here
    end)
    
    ...do_other_things...

    result = Task.await(future, timeout)
Mixing it with the Enum library makes concurrency dead simple (I got a junior dev dispatching concurrent tasks in scripts with confidence), at the expense of an ugly nested double lambda.

    some_list_of_values
    |> Enum.map(fn value ->
      Task.async(fn -> do_something_with(value) end)
    end)
    |> Enum.map(&Task.await(&1, timeout))


Does Elixir overload IO operations to be async in async contexts? Because that is largely why you cannot just wrap sync code in an async block and call it a day: once it hits a system call, the thread is paused, but the scheduler cannot tell that it should be dequeued.

This is largely why Python async took so long to mature: so much inbuilt functionality was doing IO transparently through core sync implementations that locked up any async executor.


I'm still relatively new to the erlang vm so some of the details here might be wrong, if someone wants to correct me, please it's welcomed.

Console IO operations are actually message calls to a "global group leader" which performs the operation, so they are async (and atomic). This can sometimes be confusing if an operation (such as logging) has a bunch of middlemen with an IO operation as a side effect. It's worth the atomicity, though, so none of your IO calls are interrupted by another IO call. Also, if you run a command on a remote node which dispatches IO as part of its own process, the IO will be forwarded back to its group leader (which is on your local node), which is useful for introspecting into another VM.

Disk IO is also different; each open file descriptor effectively gets its own "thread" that you send IO messages to. There are ways to bind a file descriptor directly to your current "thread", but you "have to be more careful when you do that" - you do that if performance is more important (and I have done this, it's not terrible if you are careful).

Network IO is also different; the erlang VM kind of has its own network stack, if you will, but you can set up a socket to be its own "thread" or you can bind a socket into a thread so that network packets get turned into erlang messages.

Handling blocking is all done for you by the VM, which is preemptive and tries to give threads fair share of the VM time.

When people say that programming the erlang VM is like doing everything in its own os, they aren't kidding. Except unlike linux, where your communications are basically limited, you get to interact with your processes via structured data types with coherent language (and also IPC calls are way cheaper than OS processes).

> Does Elixer overload IO operations to be async in async contexts?

Maybe the right way to answer this is: When in Elixir, presume everything is async.


This incurs runtime overhead and boilerplate, so while it’s not as hard as other languages, it’s still harder.


Like what, a few microseconds? What are you doing where you're awaiting things in parallel and that matters? HPC? We're dispatching things that take on the order of minutes. Typically a local network request has 10-20 milliseconds of latency on our office LAN, so whatever. Clean and comprehensible code with very little boilerplate is more important when I'm reviewing my junior's code.


Well, that's a hard sell for Rust, because it specifically advertises itself as a C++ replacement, which means no overhead and no runtime.


I think if you're striving for that, then a bit of complexity is warranted. Not everything has to be simple, and async is hard to do correctly without the right abstractions. Honestly, though, I was hoping Rust would go with the Actix way of doing things, but that's fine. You don't have to use Rust's async.


Elixir Tasks act closer to a very lightweight threadpool dispatch, rather than the coroutine style of async/await in other languages. An Elixir task doesn't, iirc, share memory with other tasks and won't block if you make it spin.

This makes it a hell of a lot easier to reason about.


There are ways to get around this, the way that I have done it in Nim is via a `multisync` macro:

    proc readLine(s: Socket | AsyncSocket): Future[string] {.multisync.} =
      while true:
        let c = await s.recv(1)
        case c
        of '\n':
          return
        else:
          result.add(c)
This is equivalent to defining two `readLine` procedures, one performing synchronous IO and accepting a `Socket` and another performing asynchronous IO and accepting an `AsyncSocket`. It works very well in practice.


This is why I'm curious about algebraic effects, which were recently discussed on /r/rust [0].

The main challenges I see are around usability: how best to propagate and compose them within the language design.

[0] https://www.reddit.com/r/rust/comments/cjcwmu/is_there_inter...


I found out about algebraic effects a year or two ago when I ran across the efforts to bring them to OCaml. Async/await was still a preliminary thing, so I thought algebraic effects would be a more complete solution and more in line with the Rust ethos. After asking around at some NYC meetups and on Reddit, my impression is that there isn't a lot of appetite for breaking additional new ground by bringing fringe language features (I'm unaware of a non-academic language that features them in a production release; OCaml is closest AFAIK) into a language that already has a fairly high barrier to entry.


The point of asynchronous programming is to know exactly where concurrency happens in your code. This both eliminates concurrency bugs and gives you predictability for high performance.


Cooperative multitasking, like it's 1995 again?

No thanks.


Cooperative multitasking is great within a single application, it's when you don't have preemptive multitasking between applications that you have the problem seen in the early 90s on early MacOS and Win16.


Asynchronous programming is not cooperative multitasking.


They are very similar. In cooperative multitasking, programs yield the thread to the OS, while async programs yield to the event loop. In both cases, yielding is voluntary. Are there other differences? It's probably quite easy to turn a program written one way into the other.

Edit: I just remembered that in cooperative multitasking, it's probably possible for the OS to safely save the program stack pointer, meaning the program doesn't have to unwind its stack when yielding, unlike async programs. Never mind, that makes the two models quite different. However, in practice, programs written for cooperative multitasking really should be structured just like async programs in order to be responsive (so users can, for example, interact with the GUI while downloading files in the background.)


Even conceptually the models are very different: in one, control is just given up and regained unpredictably, while in the other it is programmed; hence asynchronous programming, not multitasking.


> in one control is just given up and regained unpredictably

Which one? It's "cooperative", i.e. not unpredictable. The points where one can block are predictable and documented explicitly; otherwise, how would the programmer know they won't block forever? The same should hopefully be the case for async/awaitable APIs.

In fact, where async/await will actually give up control is harder to tease out.

The differences are really not as big as they would seem.


In cooperative multitasking you can program when to give up control, not when control is regained. The regaining part is unpredictable, which introduces a lot of non-determinism and overhead to deal with.


This is no different from async/await. At some point you await a scheduled primitive (a timer, IO readiness, an IO completion) and yield to a scheduler. You don't specify explicitly when you return. These are not tightly coupled coroutines. This is precisely what is going on in cooperative multitasking.

I don’t see how this increases overhead to deal with either.

Basically, coop multitasking and async/await operate on the exact same execution framework, the latter just gives convenient syntactic support.

Perhaps you should see how TypeScript turns async/await into plain JS.
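A sketch of that desugaring in Python rather than TypeScript (same state-machine idea; `Suspend` and `task` are made-up names for illustration): an async function compiles down to something a driver steps from the outside, and the await is exactly where control leaves it.

```python
class Suspend:
    # A minimal awaitable: its __await__ is a generator, and the bare
    # `yield` is the point where control returns to whoever drives us.
    def __await__(self):
        value = yield "suspended"
        return value

async def task():
    value = await Suspend()   # control goes back to the driver here
    return f"resumed with {value}"

coro = task()
print(coro.send(None))        # runs up to the await, yields: suspended
try:
    coro.send(42)             # the "scheduler" resumes it with a result
except StopIteration as stop:
    print(stop.value)         # resumed with 42
```

The driver (here, our manual `send` calls) plays the role of the event loop; an executor does the same thing, just keyed off timers and I/O readiness.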


Await is just syntactic sugar. You do not really await anything. What actually happens is an event handler gets called on an event, where it sets up more event handlers for more events, and so on. This is the essence of asynchronous programming. There are no tasks, no yielding, practically no overhead, and everything is deterministic (in relation to external events, obviously) [1]. The only cooperative multitasking implementations with the same amount of determinism are those implemented strictly on top of event loops that lack a yield function, so they cannot really be called cooperative multitasking implementations, as they can't "cooperate". All actual implementations have yielding, do not get control deterministically (dealing with that non-determinism requires stuff like semaphores) and have relatively significant overhead.

[1] If implemented with care (not doing syscalls in the middle of async primitives, using fast, nearly O(1) algorithms for timers, etc.), it can be incredibly fast. And of course Rust also gives enough room to mess up all that nice determinism.
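For illustration, the handler-chaining style described above, written against asyncio's loop with plain callbacks and no await at all (the handler names are made up):

```python
import asyncio

def on_connect(loop, log):
    log.append("connected")
    loop.call_soon(on_data, loop, log)   # the handler sets up the next handler

def on_data(loop, log):
    log.append("data")
    loop.stop()                          # nothing more to react to

loop = asyncio.new_event_loop()
log = []
loop.call_soon(on_connect, loop, log)
loop.run_forever()
loop.close()
print(log)   # ['connected', 'data']
```

Each handler runs to completion and merely schedules the next one; nothing is ever suspended mid-function.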


> You do not really await anything. What actually happens is an event handler gets called on an event,

So the event handler gets called immediately? No, that's not right. What would be the point of that? The event handler or continuation obviously needs to be scheduled on something that is awaitable. Meanwhile, other concurrent tasks may be able to run.

> This is the essence of asynchronous programming. There are no tasks, no yielding, practically no overhead and everything is deterministic

This is just totally wrong. Especially re tasks: https://docs.python.org/3/library/asyncio-task.html#creating...

There is nothing inherent about async and await that prevents "yielding"... the issue of yielding and semaphores is a concurrency issue, and since async and await are used in concurrent programming environments, the same issues apply.

While it is true that async and await don't require any kind of cooperative concurrency framework to work, that is kind of their whole point for existing. A single-task async/await system isn't terribly interesting.
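A quick sketch of that multi-task case with Python's asyncio (a stand-in for any async runtime): two coroutines interleave on a single thread, each yielding to the loop explicitly at its await points.

```python
import asyncio

async def worker(name, log):
    for i in range(2):
        log.append(f"{name}{i}")
        await asyncio.sleep(0)   # explicit yield point back to the loop

async def main():
    log = []
    # gather schedules both workers as tasks on the same loop
    await asyncio.gather(worker("a", log), worker("b", log))
    return log

print(asyncio.run(main()))   # ['a0', 'b0', 'a1', 'b1']
```

The interleaving happens only at the awaits, which is the cooperative-scheduling point both sides of this thread are circling around.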


> So the event handler gets called immediately? No that’s not right. What would be the point of that? The event handler or continuation obviously needs to be scheduled on something that is awaitable. Meanwhile, other concurrent tasks may be able to run.

It's kind of like this: async/await is syntactic sugar for higher-order abstractions around event loops. At the level of event loops and event handlers there is no awaiting anymore. And the whole point of event loops is to not run event handlers concurrently; that's why they are even called loops: they invoke handlers one by one, deterministically, without concurrent tasks, and once there is nothing more to run they just block and wait for new events. Obviously you can run multiple event loops in parallel, but you shouldn't share memory between them, as it defeats the purpose, is always slower, and is never really necessary; you can just use asynchronous message passing to communicate between event loops when you have to.

> A single task async/await system isn’t terribly interesting.

And yet this is the whole point of async/await, promises, futures and event loops. All of them exist to avoid mistakes and performance problems of shared memory concurrency. I mean, really, if you have semaphores or mutexes in event handlers, futures, promises or async functions - you are in a broken concurrency model zone.
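A toy version of such a loop, to make the "one by one" point concrete (just a FIFO of ready handlers; a real loop would also block on I/O readiness and timers):

```python
from collections import deque

class ToyLoop:
    def __init__(self):
        self.ready = deque()              # FIFO of pending handlers

    def call_soon(self, handler, *args):
        self.ready.append((handler, args))

    def run(self):
        # Handlers are invoked strictly in order; none runs concurrently.
        while self.ready:
            handler, args = self.ready.popleft()
            handler(*args)

loop = ToyLoop()
order = []

def first():
    order.append("first")
    loop.call_soon(second)                # handlers schedule further handlers

def second():
    order.append("second")

loop.call_soon(first)
loop.run()
print(order)   # ['first', 'second']
```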


It is regained in exactly the same cases it would be in the async model: when a blocking operation completes and the scheduler resumes the now-ready thread. The scheduler is called an executor in the async world, and a thread is a coroutine, but the concepts are very similar.


It is cooperative, so, no, control is relinquished predictably. The difference is that the syntactic limitations of the current async model prevent building abstractions.


How is it not?


Tell that to my infinite loop.


Would you mind elaborating on your opinion here?

As far as I understand, cooperative is far more efficient than preemptive, but unsuitable for poorly written or untrusted code.

I wish to learn and would really appreciate your assistance if you are willing to help.


The key difference is that cooperative multitasking lets the program yield the thread anywhere, not just back to the event loop as in async programming. Arbitrary yielding was a feature that programmers widely abused in the early Windows days. The user would start something in an app that took some time to complete; the app would freeze for a while, but all other apps remained usable. It was obvious that the programmers, rather than solving the real problem, had sprinkled some yield instructions throughout the program, which allowed the computer to keep working even though the app was unresponsive. It's a good thing that async programming frameworks don't usually allow yielding from arbitrary places.


>It's a good thing that async programming frameworks don't usually allow yielding from arbitrary places.

Well..

    await new Promise((res, rej) => { setImmediate(res); })
(In environments without `setImmediate` this is easily shimmed - https://github.com/YuzuJS/setImmediate)


True. That's the new kind of yield that requires language support and it's only available in async functions. I was referring to what happens when framework or language designers try to allow something like the await keyword in non-async functions; it turns into an epic mess. I know because I tried (as a thought experiment.) :-)


Why not?


The caller of the function knows nothing about what happens within the body of the function (is it just doing computation, or is it doing I/O?). The async keyword is how the author of the function makes it explicit that the caller should choose when to await the result.

Isn't the alternative WCiYF is proposing to allow the caller to treat any function asynchronously, while having no way to discern whether doing so might be counterproductive?


Rust async functions are also different here. They don't run the code; they create a Future structure in its initial state. Contrary to e.g. JavaScript, it doesn't start to run until you put it on an executor. So the actual function call does something very different from what happens in other languages.

These are sometimes called "cold futures".
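Python happens to behave the same way here, which makes for an easy illustration (an analogy only; Rust's actual mechanism is the poll-based Future protocol, not shown): calling the async function builds the coroutine object without running any of its body.

```python
import asyncio

ran = []

async def work():
    ran.append("body ran")
    return 7

coro = work()                # builds the coroutine; the body has NOT run
assert ran == []             # still cold
result = asyncio.run(coro)   # only now does the body execute
print(ran, result)           # ['body ran'] 7
```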



It's similar in some ways, yes. As always, details and labels differ.


The right way to handle this stuff is polymorphism. But with Rust lacking higher-kinded types I guess that's not possible. A decent test of whether their "associated type constructors" actually solve the same problems, as sometimes claimed, would be whether you can write this kind of async-polymorphic code with them.


> I thought Rust had other, better ways to create non-blocking code so I don't understand why to use async instead.

'async' exists because Python has that GIL bullshit and so Python programmers had to invent that fifth wheel of 'async programming'.

Programmers in other languages then got jealous because they, too, wanted a complex, unnecessary framework that pollutes the whole runtime and serves to differentiate regular programmers from 'rockstar' programmers.

And so async got fashionable and barely-literate coders now think async is magic performance dust that will automatically make your program run 1000% faster.

TL;DR - it's just fashion, give it five years and we'll be reading posts about how async sucks and that it's stupid legacy tech invented by bonehead dinosaurs.


Pretty sure the async/await support in C# predates Python.

C# introduced it in 5.0, which came out in August 2012. The Python proposal (PEP 3156) for an async library was posted in 2012, the proposal (PEP 492) for async/await syntax in 2015, and implemented in Python 3.4 and 3.5 respectively, I believe. So C# predates Python by about 3 years.

From what I can gather, Python was influenced by C#. But C# doesn't have a global lock, and that's not why it has async/await.

Edit: Added PEP reference.



