
As someone else said, it is not, strictly speaking, a bug. If your server receives a request that requires very computationally expensive work, is it okay to delay every other request on that core? Probably not, and it will show in your latency distribution.

Folks would rather have every future time-sliced so that other tasks get some CPU time in a ~fair way (after all, there is no concept of task priority in most runtimes).

But you're right: it isn't required, and you could sprinkle every loop of your code with yielding statements. But a future on its own can't know when it should yield. If nothing else is running, it shouldn't yield. If many things are running but the future's problem space is small, it probably shouldn't yield either, and so on.
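
Something like this is what I mean by sprinkling yields; yield_now is a real tokio function, but the loop and the every-16-chunks heuristic are made up for illustration:

    use tokio::task::yield_now;

    // A made-up CPU-heavy loop; the only point is the hand-inserted yield.
    async fn checksum(data: &[u8]) -> u64 {
        let mut sum = 0u64;
        for (i, chunk) in data.chunks(64 * 1024).enumerate() {
            sum = chunk.iter().fold(sum, |s, &b| s.wrapping_add(b as u64));
            if i % 16 == 0 {
                // Hand the worker thread back so other tasks can run. Whether
                // this is too often or not often enough depends on everything
                // else the runtime is doing, which is exactly the problem.
                yield_now().await;
            }
        }
        sum
    }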

You simply do not have the necessary information in your future to make an informed decision. You need some global entity to keep track of everything and either yield for you or tell you when you should yield. Tokio does the former, Glommio does the latter.
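
The "tell you when you should yield" shape looks roughly like this; runtime_says_yield() is a made-up stand-in for the runtime's own bookkeeping, not Glommio's or tokio's actual API:

    use tokio::task::yield_now;

    // Hypothetical hook: a real runtime would answer this from its scheduler
    // state (how long the task has run, what else is queued, etc.).
    fn runtime_says_yield() -> bool {
        false
    }

    async fn process(items: Vec<u64>) -> u64 {
        let mut acc = 0u64;
        for item in items {
            acc = acc.wrapping_add(item);
            // The future doesn't guess: it asks the global entity that actually
            // has the information, and only yields when told to.
            if runtime_says_yield() {
                yield_now().await;
            }
        }
        acc
    }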

It gets even more complex when you add IO into the mix because you need to submit IO requests in a way that saturates the network/nvme drives/whatever. So if a future submits an IO request, it's probably advantageous to yield immediately afterward so that other futures may do so as well. That's how you maximize throughput. But as I said, that's a very hard problem to solve.
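
Seen from the application side, that's also why you want many requests in flight at once instead of awaiting them one by one. A rough sketch, assuming tokio plus the futures crate, with made-up file paths:

    use futures::future::join_all;

    // All reads are started before any of them completes, so the IO driver has
    // a chance to keep the device busy instead of idling between requests.
    async fn read_many(paths: &[&str]) -> Vec<std::io::Result<Vec<u8>>> {
        join_all(paths.iter().map(|p| tokio::fs::read(p))).await
    }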




Trying to solve the problem by frequently invoking signal handlers will also show in your latency distribution!

I guess if someone wants to use futures as if they were goroutines then it's not a bug, but this sort of presupposes that an opinionated runtime is already shooting signals at itself. Fundamentally the language gives you a primitive for switching execution between one context and another, and the premise of the program is probably that execution will switch back pretty quickly from work related to any single task.

I read the blog about this situation at https://tokio.rs/blog/2020-04-preemption, which is equally baffling. The described problem cannot even happen in the "runtime" I'm currently using, because io_uring won't just completely stop responding to other kinds of SQEs and only give you responses to a multishot accept when a lot of connections are coming in. I strongly suspect equivalent results are achievable with epoll.


>Trying to solve the problem by frequently invoking signal handlers will also show in your latency distribution!

So just like any other kind of scheduling? "Frequently" is also very subjective, and there are tradeoffs between throughput, latency, and especially tail latency. You can improve throughput and minimum latency by never preempting tasks, but it's bad for average, median, and tail latency when longer tasks starve others; otherwise SCHED_FIFO would be the default scheduler on Linux.

>I read the blog about this situation at https://tokio.rs/blog/2020-04-preemption which is equally baffling

You've misunderstood the problem somehow. There is definitely nothing in there about tokio (which uses epoll on Linux and can use io_uring) not responding. io_uring and epoll have nothing to do with it and can't avoid the problem: the problem is with code that can make progress and doesn't need to poll for anything. The problem isn't unique to Rust either; it's going to exist in any cooperative multitasking system: if you rely on tasks to yield by themselves, some won't.
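
A minimal illustration of the failure mode (the loop body is made up; the point is only that nothing in it ever needs to wait):

    // On a purely cooperative runtime, spawning this next to other tasks on the
    // same worker thread starves them: every iteration can make progress, so
    // the task never yields on its own.
    async fn never_yields() -> u64 {
        let mut x: u64 = 1;
        loop {
            x = x.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
            if x == 0 {
                // effectively unreachable for a very long time; it only exists
                // so the function has a way to return
                return x;
            }
        }
    }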


> So just like any other kind of scheduling?

Yes. Industries that care about latency take some pains to avoid this as well, of course.

> io_uring and epoll have nothing to do with it and can't avoid the problem: the problem is with code that can make progress and doesn't need to poll for anything.

They totally can though? If I write the exact same code that is called out as problematic in the post, my non-preemptive runtime will run a variety of tasks while non-preemptive tokio is claimed to run only one. This is because my `accept` method would either submit an accept SQE to io_uring and swap to the runtime, or (in the case of a multishot accept) do nothing and swap to the runtime. The runtime would then continue processing all CQEs in the order received, not *only* the `accept` CQEs. The tokio `accept` method and event loop could also avoid starving other tasks if the `accept` method were guaranteed to poll at least some portion of the time and all ready handlers from one poll were guaranteed to be called before polling again.
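
Stripped of the actual io_uring plumbing, the dispatch order I'm describing is just this (the Completion type and the wake_* functions are stand-ins, not a real API):

    // Stand-in for io_uring CQEs; the point is only the dispatch order:
    // everything already received is handled before asking the kernel for
    // more, so accept completions can't crowd out read completions.
    enum Completion {
        Accepted { conn_id: u64 },
        ReadDone { conn_id: u64, nbytes: usize },
    }

    fn dispatch_batch(batch: Vec<Completion>) {
        for cqe in batch {
            match cqe {
                Completion::Accepted { conn_id } => wake_accept_task(conn_id),
                Completion::ReadDone { conn_id, nbytes } => wake_read_task(conn_id, nbytes),
            }
        }
        // only after the whole batch is drained does the loop go back to the ring
    }

    // Stand-ins for "swap back to that task's context"; a real runtime would
    // resume a stored coroutine/fiber here.
    fn wake_accept_task(conn_id: u64) { println!("resume accept task, conn {conn_id}"); }
    fn wake_read_task(conn_id: u64, nbytes: usize) { println!("resume read task, conn {conn_id}, {nbytes} bytes"); }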

This sort of design solves the problem for any case of "My task that is performing I/O through my runtime is starving my other tasks." The remaining tasks that can starve other tasks are those that perform I/O by bypassing the runtime and those that spend a long time performing computations with no I/O. The former thing sounds like self-sabotage by the user, but unfortunately the latter thing probably requires the user to spend some effort on designing their program.

> The problem isn't unique to Rust either, and it's going to exist in any cooperative multitasking system: if you rely on tasks to yield by themselves, some won't.

If we leave the obvious defects in our software, we will continue running software with obvious defects in it, yes.


>This sort of design solves the problem for any case of "My task that is performing I/O through my runtime is starving my other tasks."

Yeah, there's your misunderstanding: you've got it backwards. The problem being described occurs when I/O isn't happening because it isn't needed; there isn't a problem when I/O does need to happen.

Think of buffered reading of a file, maybe a small one that fully fits into the buffer, and reading it one byte at a time. Reading the first byte will block and go through epoll/io_uring/kqueue to fill the buffer, and other tasks can run in the meantime; subsequent calls won't block and can return immediately without ever needing to touch the poller. Or maybe it's waiting on a channel in a loop, but the producer of that channel pushed more content onto it before the consumer was done, so no blocking is needed.

You can solve this by never writing tasks that can take "a lot" of time, or "continue", whatever that means, but that's pretty inefficient in its own right. If my theoretical file reading task is explicitly yielding to the runtime on every byte by calling yield(), it is going to be very slow. You're not going to go through io_uring for every single byte of a file individually when running "while next_byte = async_read_next_byte(file) {}" code in any language if you have heap memory available to buffer it.
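
Concretely, with tokio's real types (the file name is a placeholder), only the occasional buffer refill ever reaches the IO driver:

    use tokio::fs::File;
    use tokio::io::{AsyncReadExt, BufReader};

    async fn count_newlines() -> std::io::Result<u64> {
        // BufReader fills an in-memory buffer on the first read; if the file
        // fits in it, every later read_u8() is served from memory and never
        // needs to touch epoll/io_uring at all.
        let mut reader = BufReader::new(File::open("data.txt").await?);
        let mut newlines = 0u64;
        loop {
            match reader.read_u8().await {
                Ok(b) => {
                    if b == b'\n' {
                        newlines += 1;
                    }
                }
                Err(e) if e.kind() == std::io::ErrorKind::UnexpectedEof => break,
                Err(e) => return Err(e),
            }
        }
        Ok(newlines)
    }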


Reading from a socket, as in the linked post, is an example of not performing I/O? I'm not familiar with tokio, so I did not know that it maintained buffers in userspace and filled them before the user called read(), but that's unimportant: it could still have read() yield and return the contents of the buffer.

I assumed that users would issue reads of like megabytes at a time and usually receive less. Does the example of reading from a socket in the blog post presuppose a gigabyte-sized buffer? It sounds like a bigger problem with the program is the per-connection memory overhead in that case.

The proposal is obviously not to yield 1 million times before returning a 1 meg buffer or to call read(2) passing a buffer length of 1, is this trolling? The proposal is also not some imaginary pie-in-the-sky idea; it's currently trading millions of dollars of derivatives daily on a single thread.


You're confusing IO not happening because it's not needed with IO never happening. Just because a method can perform IO doesn't mean it actually does every time you call it. If I call async_read(N) for the next N bytes, that isn't necessarily going to touch the IO driver. If your task can make progress without polling, it doesn't need to poll.

>I'm not familiar with tokio so I did not know that it maintained buffers in userspace

Most async runtimes are going to do buffering on some level, for efficiency if nothing else. It's not strictly required but you've had an unusual experience if you've never seen buffering.

>filled them before the user called read()

Where did you get this idea? Since you seem to be quick to accuse others of it, this does seem like trolling. At the very least it's completely out of nowhere.

>it could still have read() yield and return the contents of the buffer.

If I call a read_one_byte, read_line, or read(N) method and it returned content past the end of what was requested, that would be a problem.

>I assumed that users would issue reads of like megabytes at a time and usually receive less.

Reading from a channel is the other easy example, if files were hard to follow. The channel read might be implemented as a quick atomic check to see if something is available and consume it, only yielding to the runtime if it needs to wait. If a producer on the other end is producing things faster than the consumer can consume them, the consuming task will never yield. You can implement a channel read method that always yields, but again, that'd be slow.
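
A sketch of the channel case with tokio's mpsc (whether it actually starves siblings in practice depends on the runtime's budgeting; the point is only that recv() can complete without waiting):

    use tokio::sync::mpsc;

    #[tokio::main(flavor = "current_thread")]
    async fn main() {
        let (tx, mut rx) = mpsc::unbounded_channel::<u64>();

        // Producer: keeps the queue non-empty (it yields so it doesn't hog the
        // thread itself).
        tokio::spawn(async move {
            let mut i = 0u64;
            while tx.send(i).is_ok() {
                i += 1;
                tokio::task::yield_now().await;
            }
        });

        // Consumer: whenever an item is already queued, recv() completes
        // immediately without touching the IO driver, so on a purely
        // cooperative runtime this loop only yields when the queue is empty.
        while let Some(v) = rx.recv().await {
            if v >= 1_000 {
                break;
            }
        }
    }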

>The proposal is obviously not to yield 1 million times before returning a 1 meg buffer, is this trolling

No, giving an illustrative example is not trolling, even if I kept the numbers simple to make it easy to follow. But your flailing about with the idea of requiring gigabyte-sized buffers probably is.


> You're confusing IO not happening because it's not needed with IO never happening. Just because a method can perform IO doesn't mean it actually does every time you call it. If I call async_read(N) for the next N bytes, that isn't necessarily going to touch the IO driver.

Maybe you can read the linked post again? The problem in the example in the post is that data keeps coming from the network. If you were to strace the program, you would see it calling read(2) repeatedly. The runtime chooses to starve all other tasks as long as these reads return more than 0 bytes. This is obviously not the only option available.
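
Roughly the shape of the loop in question, paraphrased rather than copied from the post:

    use tokio::io::{AsyncReadExt, AsyncWriteExt};
    use tokio::net::TcpStream;

    // As long as read() keeps returning data immediately, nothing in this loop
    // inherently hands the worker thread back to other tasks; it's up to the
    // runtime (or the author) to decide when to break the streak.
    async fn echo(mut socket: TcpStream) -> std::io::Result<()> {
        let mut buf = [0u8; 4096];
        loop {
            let n = socket.read(&mut buf).await?;
            if n == 0 {
                return Ok(()); // peer closed the connection
            }
            socket.write_all(&buf[..n]).await?;
        }
    }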

I apologize for charitably assuming that you were correct in the rest of my reply and attempting to fill in the necessary circumstances which would have made you correct.


Actually, no, I misread it while trying to make sense of what you were posting, so this post is edited.

This is just mundane non-blocking sockets. If the socket never needs to block, it won't yield. Why go through epoll/uring unless it returns EWOULDBLOCK?
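
In concrete terms, with std's plain sockets (assuming set_nonblocking(true) was already called on the stream):

    use std::io::{ErrorKind, Read};
    use std::net::TcpStream;

    // If the kernel already has data buffered, read() returns it right away;
    // only a WouldBlock result means there is anything to register with
    // epoll/io_uring/kqueue and wait for.
    fn try_read(stream: &mut TcpStream, buf: &mut [u8]) -> std::io::Result<Option<usize>> {
        match stream.read(buf) {
            Ok(n) => Ok(Some(n)),
            Err(e) if e.kind() == ErrorKind::WouldBlock => Ok(None),
            Err(e) => Err(e),
        }
    }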


For io_uring, all the reads go through io_uring and generally don't send back a result until some data is ready. So you'll receive a single stream of syscall results in which the results for all fds are interleaved, and you won't even be able to write code that has one task doing I/O starving other tasks. For epoll, polling the epoll instance is how you get notified of readiness for all the other fds too. But the important thing isn't to poll the socket that you know is ready; it's to yield to the runtime at all, so that other tasks can be resumed. Amusingly, upon reading the rest of the blog post I discovered that this is exactly what tokio does: it just always yields after a certain number of operations that could yield. It doesn't implement preemption.
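
A hand-rolled version of that "yield after N operations" idea looks like this; the constant and the helper are mine, tokio's real budgeting lives inside its own resource types:

    use tokio::task::yield_now;

    // Made-up budget for illustration.
    const BUDGET: u32 = 128;

    async fn drain<T>(mut next_ready: impl FnMut() -> Option<T>, mut handle: impl FnMut(T)) {
        let mut ops = 0u32;
        while let Some(item) = next_ready() {
            handle(item);
            ops += 1;
            if ops % BUDGET == 0 {
                // We could keep going without waiting on IO, but hand the
                // thread back anyway so other tasks get a turn.
                yield_now().await;
            }
        }
    }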


Honestly, I assumed you had read the article and were just confused about how tokio was pretending to have preemption. Now you reveal you hadn't read it, so I'm confused about you in general; it seems like a waste of time. But I'm glad you're at least on the same page now about how checking whether something is ready and yielding to the runtime are separate things.


You're in a reply chain that began with another user claiming that tokio implements preemption by shooting signals at itself.

> But I'm glad you're at least on the same page now, about how checking if something is ready and yielding to the runtime are separate things.

I haven't ever said otherwise?



