
This is nothing to do with async Rust; monoio (and possibly other io-uring libraries) are just exposing a flawed API. My ringbahn library written in 2019 correctly handled this case by having a dropped accept future register a cancellation callback to be executed when the accept completes.

https://github.com/ringbahn/ringbahn






Doesn't this close the incoming connection, rather than allowing another pending accept to receive it?

You're right. Looking at my actual code, what I did instead was store the accepted connection to be yielded the next time you call accept, and only cancel an accept if the entire listener object is dropped mid-accept.

The solution proposed in this post doesn't work, though: if the accept completes before the SQE for the cancellation is submitted, the FD will still be leaked. io-uring's async cancellation mechanism is just an optimization opportunity and doesn't synchronize anything, so it can't be relied on for correctness here. My library could have submitted a cancellation when the future drops as such an optimization, but couldn't have relied on it to ensure the accept does not complete.


> You're right. Looking at my actual code, what I did instead was store the accepted connection to be yielded the next time you call accept, and only cancel an accept if the entire listener object is dropped mid-accept.

This is still a suboptimal solution, as you've accepted a connection, informing the client side of this, and then killed it, rather than never accepting it in the first place. (Worth noting that linux (presumably as an optimisation) accepts connections before you call accept anyway so maybe this entire point is moot and we just have to live with this weird behaviour.)

Now it's true that "never accepting it in the first place" might not be possible with io_uring in some cases, but rather than hiding that under drop, the code should be up front about it and prevent dropping (not currently possible in Rust) in a situation where there might be uncompleted in-flight requests, before you've explicitly made a decision between "oh okay then, let's handle this one last request" and "I don't care, just hang up".


If you want the language to encode a liveness guarantee that you do something meaningful in response to an accept, rather than just accept and close, you do need linear types. I don't know any mainstream language that encodes that guarantee in its type system, whatever IO mechanism it uses.

This all feels like the abstraction level is wrong. If I think of a server as doing various tasks, one of which is to periodically pull an accepted connection off the listening socket, and I cancel that task, then, sure, the results are awkward at best and possibly wrong.

But I’ve written TCP servers and little frameworks, asynchronously, and this whole model seems wrong. There’s a listening socket, a piece of code that accepts connections, and a backpressure mechanism, and that entire thing operates as a unit. There is no cancellable entity that accepts sockets but doesn’t also own the listening socket.

Or one can look at this another way: after all the abstractions and libraries are peeled back, the example in the OP is setting a timeout and canceling an accept when the timeout fires. That’s rather bizarre — surely the actual desired behavior is to keep listening (and accepting when appropriate) and do the other timed work concurrently.

It just so happens that, at the syscall level, a nonblocking (polled, selected, epolled, or even just called at intervals) accept that hasn’t completed is a no-op, so canceling it doesn’t do anything, and the example code works. But it would fail in a threaded, blocking model, it would fail in an inetd-like design, and it fails with io_uring. And I really have trouble seeing linear types as the solution — the whole structure is IMO wrong.

(Okay, maybe a more correct structure would have you “await connection_available()” and then “pop a connection”, and “pop a connection” would not be async. And maybe a linear type system would prevent one from being daft, successfully popping a connection, and then dropping it by accident.)


> maybe a more correct structure would have you “await connection_available()” and then “pop a connection”

This is the age-old distinction between a proactor and reactor async design. You can normally implement one abstraction on top of the other, but the conversion is sometimes leaky. It happens that the underlying OS "accept" facility is reactive and it doesn't map well to a pure async accept.


I’m not sure I agree. accept() pops from a queue. You can wait-and-pop or you can pop-or-fail. I guess the former fits in a proactor model and the latter fits in a reactor model, but I think that distinction misses the point a bit. Accepting sockets works fine in either model.

It breaks down in a context where you do an accept that can be canceled and you don’t handle it intelligently. In a system where cancellation is synchronous enough that values won’t just disappear into oblivion, one could arrange for a canceled accept that succeeded to put the accepted socket on a queue associated with the listening socket, fine. But, in general, the operation “wait for a new connection and irreversibly claim it as mine” IMO just shouldn’t be done in a cancellable context, regardless of whether it’s a “reactor” or a “proactor”. The whole “select and, as one option, irrevocably claim a new connection” code path in the OP seems suspect to me, and the fact that it seems to work under epoll doesn’t really redeem it in my book.


This is a simple problem I have met and dealt with before.

The issue is the lack of synchronization between cancellation and completion, and not handling cancel failure.

All cancellations can fail because there is always a race where the operation completes just as cancel() is called.

You have two options: synchronous cancel (block until we know if the cancel succeeded) or async cancel (callback or other notification).

This code simply handles the race incorrectly, no need to think too hard about this.

It may be that some io_uring operations cannot be cancelled; that is a Linux limitation. I've also seen there is no async way to close sockets, which is another issue.


> You have two options: synchronous cancel (block until we know if the cancel succeeded) or async cancel (callback or other notification).

> This code simply handles the race incorrectly, no need to think too hard about this.

I still think the race is unnecessary. In the problematic code, there’s an operation (await accept) that needs special handling if it’s canceled. A linear type system would notice the lack of special handling and complain. But I would still solve it differently: make the sensitive operation impossible to cancel. “await accept()” can be canceled. Plain “accept” cannot. And there is no reason at all that this operation needs to be asynchronous or blocking!

(Even in Rust’s type system, one can build an “await ready_to_accept()” such that a subsequent accept is guaranteed to succeed, without races, by having ready_to_accept return a struct that implements Drop by putting the accepted socket back in the queue for someone else to accept. Or you can accept the race where you think you’re ready to accept but a different thread beat you to it and you don’t succeed.)
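
A rough sketch of that guard idea, with made-up names (AcceptQueue, ClaimedConnection, ready_to_accept) rather than any real library's API:

    use std::collections::VecDeque;
    use std::net::TcpStream;
    use std::sync::{Arc, Mutex};

    // Hypothetical queue of connections the reactor has already accepted.
    struct AcceptQueue {
        ready: Mutex<VecDeque<TcpStream>>,
    }

    // Guard returned by a hypothetical `ready_to_accept()`. While it's alive,
    // one connection is claimed; dropping it without calling `take()` puts the
    // connection back for someone else instead of leaking or closing it.
    struct ClaimedConnection {
        queue: Arc<AcceptQueue>,
        conn: Option<TcpStream>,
    }

    impl ClaimedConnection {
        // The irrevocable, non-async "pop": after this, the socket is yours.
        fn take(mut self) -> TcpStream {
            self.conn.take().expect("connection already taken")
        }
    }

    impl Drop for ClaimedConnection {
        fn drop(&mut self) {
            // The claim was cancelled before `take()`: return the socket to
            // the queue rather than silently dropping it.
            if let Some(conn) = self.conn.take() {
                self.queue.ready.lock().unwrap().push_back(conn);
            }
        }
    }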


TCP connections aren’t correct representations of the liveness of sessions. The incorrectness is acute when it’s mobile browsers connecting over LTE to load balanced web servers. That’s why everyone reinvents a session idea on top of the network.

> Worth noting that linux (presumably as an optimisation) accepts connections before you call accept anyway so maybe this entire point is moot and we just have to live with this weird behaviour.

listen(2) takes a backlog parameter that is the number of queued (which I think means ack'd) but not yet popped (i.e. accept'ed) connections.


And if you pass 0 it pre-acks one connection before you accept (which is what I was referring to).

> if the accept completes before the SQE for the cancellation is submitted, the FD will still be leaked.

If the accept completes before the cancel SQE is submitted, the cancel operation will fail and the runtime will have a chance to poll the CQE in place and close the fd.
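
A sketch of what that could look like, assuming the libc crate and a hypothetical per-operation slot in the runtime; the result codes (-ENOENT, -EALREADY) are the documented ones for IORING_OP_ASYNC_CANCEL:

    // Hypothetical per-operation state; the owning future has been dropped,
    // so any result must be cleaned up rather than delivered.
    struct OpSlot {
        orphaned: bool,
    }

    fn close_raw_fd(fd: i32) {
        // SAFETY: the fd came from a successful accept CQE and has no other owner.
        unsafe { libc::close(fd); }
    }

    // CQE for the IORING_OP_ASYNC_CANCEL submitted when the future was dropped.
    fn on_cancel_cqe(cancel_res: i32) {
        match cancel_res {
            0 => { /* found and cancelled: the accept's CQE will carry -ECANCELED */ }
            r if r == -libc::ENOENT || r == -libc::EALREADY => {
                // The accept completed (or was already running) before the
                // cancel was processed; its CQE will carry a real fd that
                // nobody is waiting for.
            }
            _ => { /* unexpected error; still wait for the accept's own CQE */ }
        }
    }

    // Either way the accept's own CQE arrives eventually; since the slot is
    // orphaned, close the fd (if any) instead of waking a dropped task.
    fn on_accept_cqe(accept_res: i32, op: &OpSlot) {
        if op.orphaned && accept_res >= 0 {
            close_raw_fd(accept_res);
        }
    }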


Hmm, because the cancel CQE will have a reference to the CQE it was supposed to cancel? Yes, that could work.

The rest of the blog discusses how to continue processing an operation after cancellation fails, which is blocked by the Rust abstraction. Yes, not everyone (probably very few people) would define this as a safety issue; I wrote about that at the end of the blog.

I don't consider Yosh Wuyts's concept of "halt safety" coherent, meaningful or worth engaging with. It's true that linear types would enable the encoding of additional liveness guarantees that Rust's type system as it exists cannot encode, but this doesn't have anything to do with broken io-uring libraries leaking resources.

Continuing to process after a cancellation failure is a challenge I face in my actual work, and I agree that "halt-safety" lacks definition and context. I have also learned a lot from, and been inspired by, your blog posts; I appreciate it.

Agree. When I hear “I wish Rust was Haskell” I assume the speaker is engaged in fantasy, not in engineering. The kernel is written in C and seems to be able to manage just fine. Problem is not Rust. Problem is wishing Rust was Haskell.

Well, it's "about" async Rust and io-uring inasmuch as they represent incompatible paradigms.

Rust assumes as part of its model that "state only changes when polled". Which is to say, it's not really "async" at all (none of these libraries are), it's just a framework for suspending in-progress work until it's ready. But "it's ready" is still a synchronous operation.

But io-uring is actually async. Your process memory state is being changed by the kernel at moments that have nothing to do with the instruction being executed by the Rust code.


You are completely incorrect. You're responding to a comment in which I link to a library which handles this correctly, how could you persist in asserting that they are incompatible paradigms? This is the kind of hacker news comment that really frustrates me, it's like you don't care if you are right or wrong.

Rust does not assume that state changes only when polled. Consider a channel primitive. When a message is put into a channel at the send end, the state of that channel changes; the task waiting to receive on that channel is awoken and finds the state already changed when it is polled. io-uring is really no different here.
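
To make that concrete with a toy example (not any particular library's code), here is the skeleton of a oneshot channel; the shared state is mutated by send(), outside any poll, and the receiving future just observes it the next time it is polled:

    use std::future::Future;
    use std::pin::Pin;
    use std::sync::{Arc, Mutex};
    use std::task::{Context, Poll, Waker};

    // Toy oneshot channel: state changes when `send` is called, not when polled.
    struct Shared<T> {
        value: Option<T>,
        waker: Option<Waker>,
    }

    struct Sender<T>(Arc<Mutex<Shared<T>>>);
    struct Receiver<T>(Arc<Mutex<Shared<T>>>);

    fn channel<T>() -> (Sender<T>, Receiver<T>) {
        let shared = Arc::new(Mutex::new(Shared { value: None, waker: None }));
        (Sender(shared.clone()), Receiver(shared))
    }

    impl<T> Sender<T> {
        fn send(self, value: T) {
            let mut shared = self.0.lock().unwrap();
            shared.value = Some(value); // the state changes here, outside poll
            if let Some(waker) = shared.waker.take() {
                waker.wake(); // then the waiting task is told to look again
            }
        }
    }

    impl<T> Future for Receiver<T> {
        type Output = T;
        fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<T> {
            let mut shared = self.0.lock().unwrap();
            match shared.value.take() {
                Some(v) => Poll::Ready(v), // finds the already-changed state
                None => {
                    shared.waker = Some(cx.waker().clone());
                    Poll::Pending
                }
            }
        }
    }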


What you're describing is a synchronous process, though! ("When a message is put..."). That's the disconnect in the linked article. Two different concepts of asynchrony: one has to do with multiple contexts changing state without warning, the other (what you describe) is about suspending thread contexts "until" something happens.

Again you are wrong. A forum full of people who just like to hear themselves talk. I guess it makes you feel good in some way?

With io-uring the kernel writes CQEs into a ring buffer in shared memory and the user program reads them: it's literally just a bounded channel, the same atomic synchronizations, the same algorithm. There is no difference whatsoever.

The io-uring library is responsible for reading CQEs from that ring buffer and then dispatching them to the task that submitted the SQE they correspond to. If that task has cancelled its interest in this syscall, it should instead clean up the resources owned by that CQE. According to this blog post, monoio fails to do so. That's all that's happening here.
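
In sketch form (with invented names, assuming the libc crate; this is not monoio's actual code), that dispatch step looks something like this:

    use std::collections::HashMap;
    use std::task::Waker;

    // Hypothetical per-SQE state tracked by the reactor, keyed by user_data.
    enum Lifecycle {
        Waiting(Waker),  // a task is waiting on this completion
        Cancelled,       // the future was dropped; nobody will consume the result
        Completed(i32),  // result parked until the task polls again
    }

    fn dispatch_cqe(ops: &mut HashMap<u64, Lifecycle>, user_data: u64, res: i32) {
        match ops.remove(&user_data) {
            Some(Lifecycle::Waiting(waker)) => {
                // Park the result where the task will find it, then wake it.
                ops.insert(user_data, Lifecycle::Completed(res));
                waker.wake();
            }
            Some(Lifecycle::Cancelled) => {
                // The task lost interest. For an accept, `res` is a live fd
                // that would otherwise leak, so the reactor closes it here.
                if res >= 0 {
                    unsafe { libc::close(res); }
                }
            }
            Some(Lifecycle::Completed(_)) | None => {
                // Duplicate or unknown completion; ignored in this sketch.
            }
        }
    }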


> If that task has cancelled its interest in this syscall, it should instead clean up the resources owned by that CQE.

So, first: how is that not consistent with the contention that the bug is due to a collision in the meaning of "asynchronous"? You're describing, once more, a synchronous operation ("when ... cancel") on a data structure that doesn't support that ("the kernel writes ..." on its own schedule).

And second: the English language text of your solution has race conditions. How do you prevent reading from the buffer after the beginning of "cancel" and before the "dispatch"? You need some locking in there, which you don't need in general async code. Ergo it's a paradigm clash. Developers, you among them it seems, don't really understand the requirements of a truly async process and get confused trying to shoehorn it into a "callbacks with context switch" framework like rust async.


> Developers, you among them it seems, don't really understand the requirements of a truly async process and get confused trying to shoehorn it into a "callbacks with context switch" framework like rust async.

This is an odd thing to say about someone who has written a correct solution to the problem which triggered this discussion.

Also, you really need to define what truly async means. Many layers of computing are async or not async depending on how you look at them.


Saw this show up after the fact. Maybe it's safe enough for me to try to re-engage: The point I was trying to make, to deafening jeering, is that the linked bug is a really very routine race condition that is "obvious" to people like me coming from a systems programming background who deal with parallelism concerns all the time. It looks interesting and weird in the context of an async API precisely because async APIs work to hide this kind of detail (in this case, the fact that the events being added to the queue are in a parallel context and racing with the seemingly-atomic "cancel" operation).

APIs to deal with things like io-uring (or DMA device drivers, or shared memory media streams, etc...) tend necessarily to involve explicit locking all the way up at the top of the API to make the relationship explicit. Async can't do that, because there's nowhere to put the lock (it only understands "events"), and so you need to synthesize it (maybe by blocking the cancelling thread until the queue drains), which is complicated and error prone.

This isn't unsolvable. But it absolutely is a paradigm collision, and something I think people would be better served to treat seriously instead of calling others names on the internet.


Hi, I’m also from a systems programming background.

I’m not sure what your level of experience with Rust’s async model is, but an important thing to note is that work is split between an executor and the Future itself. Executors are not “special” in any way. In fact, the Rust standard library doesn’t even provide an executor.

Futures in Rust rely on their executors to do anything nontrivial. That includes the actual interaction with the io-uring API in this case.

A properly implemented executor really should handle cases where a Future decides to cancel its interest in an event.

Executors are themselves not implemented with async code [0]. So I’m not quite able to understand your claim of a paradigm mismatch.

[0]: subexecutors like FuturesUnordered notwithstanding.


https://news.ycombinator.com/item?id=41996976

See this comment describing how an executor can properly handle cancelations.


[flagged]


I think we just have to end this, your tone is just out of control and you're doing the "assume bad faith" trick really badly. But to pick out some bits where I genuinely think you're getting confused:

> Rust has ample facilities for preventing you from reading from the buffer after cancellation

The linked bug is a race condition. It's not about "after" and if you try to reason about it like that you'll just recapitulate the mistakes. And yes, rust has facilities to prevent race conditions, but they're synchronization tools and not part of async, and lots of developers (ahem) seem not to understand the requirements.


Again you’re just wrong.

Based on this post, when you drop a monoio TcpListener nothing happens. If there is an accept in flight, when it completes the reactor wakes your task, which ignores the wake-up and goes back to sleep. INSTEAD when you drop the TcpListener it should cancel interest in this event with the reactor, and when the event completes the reactor should clean up the state for the completed event (which means closing the newly opened file descriptor in this case).

Does this involve synchronization? Yes! Surprise surprise, when you share state between concurrent processes (whether they be tasks, threads, processes, or userspace and the kernel) you need some form of synchronization. When you say things like “Rust’s facilities to prevent race conditions [are] synchronization tools and not part of async” you are speaking nonsense, because async Rust in all its forms is built on these synchronization primitives, whether they be atomic variables or system mutexes or what have you.


Unbelievable! how bloody rude can you be?

To the moderators (dang), do people get to keep their account here just because they're a "famous" poster despite writing the way they're doing all over this post? I'm assuming other posters have been banned for substantially less aggressive behaviour...


> Again you are wrong. A forum full of people who just like to hear themselves talk. I guess it makes you feel good in some way?

I think you're being unduly harsh here. There are a variety of voices here, of various levels of expertise. If someone says something you think is incorrect but it seems that they are speaking in good faith then the best way to handle the situation is to politely provide a correct explanation.

If you really think they are in bad faith then calmly call them out on it and leave the conversation.


I've been following withoutboats for ~6 years and it really feels like his patience has completely evaporated. I get it though, he has been really in the weeds of Rust's async implementation and has argued endlessly with those who don't like the tradeoffs but only have a surface level understanding of the problem.

I think I've read this exact convo maybe 20+ times across HN, Reddit, GitHub issues, and Twitter, on various topics including but not limited to async I/O, Pin, and cancellation.


I freely admit I’m frustrated by the discourse around async Rust! I’m also very frustrated because I feel I was iced out of the project for petty reasons to do with whom I’m friends with and the people who were supposed to take over my work have done a very poor job, hence the failure to ship much of value to users. What we shipped in 2019 was an MVP that was intended to be followed by several improvements in quick succession, which the Rust project is only now moving toward delivering. I’ve written about this extensively.

My opinion is that async Rust is an incredible achievement, primarily not mine (among the people who deserve more credit than me are Alex Crichton, Carl Lerche, and Aaron Turon). My only really significant contributions were making it safe to use references in an async function and documenting how to interface with completion based APIs like io-uring correctly. So it is very frustrating to see the discourse focused on inaccurate statements about async Rust which I believe is the best system for async IO in any language and which just needs to be finished.


> So it is very frustrating to see the discourse focused on inaccurate statements about async Rust

> No, ajross is very confidently making false descriptions of how async Rust and io-uring operate. This website favors people who sound right whether or not they are, because most readers are not well informed but have a ridiculous confidence that they can infer what is true based on the tone and language used by a commenter. I find this deplorable and think this website is a big part of why discourse around computer science is so ignorant, and I respond accordingly when someone confronts me with comments like this.

They had an inaccurate (from your point of view) understanding. That's all. If they were wrong that's not a reason to attack them. If you think they were over-confident (personally I don't) that's still not a reason to attack them.

Again, I think ajross set out their understanding in a clear and polite manner. You should correct them in a similar manner.


> has argued endlessly with those who don't like the tradeoffs but only have a surface level understanding of the problem

But that's really not what's going on here.

ajross has an understanding of the fundamentals of async that is different to withoutboats'. ajross is setting this out in a clear and polite way that seems to be totally in good faith.

withoutboats is responding in an extremely rude and insulting manner. Regardless of whether they are right or not (and given their background they probably are), they are absolutely in the wrong to adopt this tone.


>ajross has an understanding of the fundamentals of async that is different to withoutboats'.

ajross has an understanding of the fundamentals of async, but a surface level understanding of io-uring and Rust async. It's 100% what is going on, and again, it's something I've seen play out 100s of times.

>Rust assumes as part of its model that "state only changes when polled".

This is fundamentally wrong. If you have a surface level understanding of how the Rust state-machine works, you could make this inference, but it's wrong. This premise is wrong, so ajross' mental model is flawed - and withoutboats is at a loss trying to educate people who get the basic facts wrong and has defaulted to curt expression. And I get it - you see it a lot with academic types when someone with a Wikipedia overview of a subject tries to "debate". You either have to give an impromptu lecture on 101-level material that is freely available or you just say "you're wrong". Neither tends to work.

I'm not saying I condone withoutboats' tone, but my comment is really just a funny anecdote because withoutboats engages in this often and I've seen his tone shift from the "try to educate" to the "you're just wrong" over the past 6 years.


No, ajross is very confidently making false descriptions of how async Rust and io-uring operate. This website favors people who sound right whether or not they are, because most readers are not well informed but have a ridiculous confidence that they can infer what is true based on the tone and language used by a commenter. I find this deplorable and think this website is a big part of why discourse around computer science is so ignorant, and I respond accordingly when someone confronts me with comments like this.

Still no reason for unprovoked personal attacks.

Stick to technical arguments.


Alternatively, there's a downside to being "really in the weeds" of any problem: you fail to poke your head up to understand other paradigms and how they interact.

I live in very different weeds, and I read the linked article and went "Oh, yeah, duh, it's racing on the io-uring buffer". And tried to explain that as a paradigm collision (because it is). And I guess that tries the patience of people who think hard about async[1] but never about concurrency and parallelism.

[1] A name that drives systems geeks like me bananas because everything in an async programming solution IS SYNCHRONOUS in the way we understand the word!


> Again you are wrong. A forum full of people who just like to hear themselves talk. I guess it makes you feel good in some way?

This is why people don't like the Rust community.


> Rust does not assume that state changes only when polled.

I will replace this with a more exact description, thanks.


The post only talks about "future state"; maybe I didn't point this out clearly. With epoll, the accept syscall and the future's state change happen in the same poll, which is not the case with io_uring. Once the accept syscall completes, the future has conceptually already advanced to completion, but in actual Rust it has not at that moment.

It's true, there's a necessary layer of abstraction with io-uring that doesn't exist with epoll.

With epoll, the reactor just maps FDs to Wakers, and then wakes whatever Waker is waiting on that FD. Then that task does the syscall.

With io-uring, instead the reactor is reading completion events from a queue. It processes those events, sets some state, and then wakes those tasks. Those tasks find the result of the syscall in that state that the reactor set.

This is the difference between readiness (epoll) and completion (io-uring): with readiness the task wakes when the syscall is ready to be performed without blocking, with completion the task wakes when the syscall is already complete.

When a task loses interest in an event in epoll, all that happens is it gets "spuriously awoken," so it sees there's nothing for it to do and goes back to sleep. With io-uring, the reactor needs to do more: when a task has lost interest in an incomplete event, that task needs to set the reactor into a state where instead of waking it, it will clean up the resources owned by the completion event. In the case of accept, this means closing that FD. According to your post, monoio fails to do this, and just spuriously wakes up the task, leaking the resource.

The only way this relates to Rust's async model is that all futures in Rust are cancellable, so the reactor needs to handle the possibility that interest in a syscall is cancelled, or else it is incorrect. But it's completely possible to implement an io-uring reactor correctly under Rust's async model; this is just a requirement for doing so.
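
The drop side of the same idea, again in sketch form with invented names (the Lifecycle enum has the same shape as in the dispatch sketch further up the thread):

    use std::collections::HashMap;
    use std::task::Waker;

    // Same shape of per-operation state as in the dispatch sketch above.
    enum Lifecycle {
        Waiting(Waker),
        Cancelled,
        Completed(i32),
    }

    // Hypothetical accept future holding a handle to the reactor's op table.
    struct AcceptFuture<'a> {
        ops: &'a mut HashMap<u64, Lifecycle>,
        user_data: u64,
    }

    impl Drop for AcceptFuture<'_> {
        fn drop(&mut self) {
            match self.ops.remove(&self.user_data) {
                // The CQE already arrived but was never consumed: close the fd now.
                Some(Lifecycle::Completed(res)) => {
                    if res >= 0 {
                        unsafe { libc::close(res); }
                    }
                }
                // Still in flight: leave a marker so the reactor closes the fd
                // when the completion is reaped, instead of waking this task.
                Some(Lifecycle::Waiting(_)) => {
                    self.ops.insert(self.user_data, Lifecycle::Cancelled);
                }
                Some(Lifecycle::Cancelled) | None => {}
            }
        }
    }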


> But it's completely possible to implement an io-uring reactor correctly under Rust's async model; this is just a requirement for doing so.

I don't get why people say it's incompatible with Rust when Rust async libraries work with IOCP, which follows a similar model to io-uring.


To be fair, I’m not sure if there exists any zero cost IOCP library.

The main way people use IOCP is via mio, via tokio. To make IOCP present a readiness interface, mio introduces a data copy. This is because tokio/mio assume you're deploying to Linux and only developing on Windows, and so optimize performance for epoll. So it's reasonable to wonder if a completion-based interface can be zero cost.

But the answer is that it can be zero cost, and we've known that for half a decade. It requires different APIs from readiness-based interfaces, but it's completely possible, without introducing the copy, using either a "pass ownership of the buffer" model or a "buffered IO" model.
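
For illustration, the buffer-ownership shape looks roughly like this (the type and method here are invented; only the signature matters):

    use std::io;

    // Hypothetical completion-based file handle; only the signature matters.
    struct UringFile(/* ring handle, fd, ... */);

    impl UringFile {
        // Ownership of `buf` moves into the operation and comes back with the
        // result, so cancellation never leaves the kernel writing into memory
        // the caller has already freed, and no extra copy is needed.
        async fn read_at(&self, buf: Vec<u8>, offset: u64) -> (io::Result<usize>, Vec<u8>) {
            // Submission/completion plumbing elided in this sketch.
            let _ = offset;
            (Ok(0), buf)
        }
    }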

Either way, this is unrelated to the issue this blog post identifies, which is just that some io-uring libraries handle cancellation incorrectly.



