Hacker News new | past | comments | ask | show | jobs | submit login

Point taken. What about this pattern (pseudocode; obviously it would require, e.g., adding some code for tracking how much data is in the buffer or breaking the loop on EOF, but it illustrates the point):

   let mut buffer: Vec<u8> = ...;
   loop {
     select! {
       _ = stream.readable() => stream.try_read(&mut buffer),
       _ = stream.writable() => stream.try_write(&buffer),
     }
   }



Once you add enough tracking metadata to know how much data is in the buffer, you have literally implemented an SPSC queue.


Well, not really, because async/await guarantees I don't have to deal with the problem of the producer adding data at the same time as the consumer is removing it. In a proper SPSC queue some degree of synchronization is needed.


You stop adding data when the queue is full; you stop popping when it is empty. You need the exact same synchronisation for async, just different primitives.


But that's not synchronization between two concurrent things. I can still reason about the queue being full in a sequential way.

   select! {
     _ = channel.readable(), if queue.has_free_space() => read(&mut queue),
     _ = channel.writable(), if queue.has_data() => write(&mut queue),
   }
The point is I can implement `has_free_space` and `has_data` without thinking about concurrency / parallelism / threads. I don't even need to think about what happens if, in the middle of my `has_free_space` check, another thread goes in and adds some data. And I don't need to invoke any costly locks or atomic operations there to ensure the safety of my queue structure. Just purely sequential logic, which is way simpler to reason about than any SPSC queue.


As I mentioned elsewhere in this thread, if you do not care about parallelism you can pin your threads and use SCHED_FIFO for scheduling, and then you do not need any synchronization.

In any case acq/rel is the only thing required here and it is extremely cheap.

edit: in any case we are discussing synchronization, and `has_free_space` / `has_data` are a form of synchronization; we all agree that async and threads have different performance characteristics.


> As I mentioned elsewhere in this thread, if you do not care about parallelism you can pin your threads and use SCHED_FIFO for scheduling, and then you do not need any synchronization.

I don't think it is a universal solution. What if I am interested in parallelism as well, just not for the coroutines that operate on the same data? If my app handles 10k connections, I want them handled in parallel; they share nothing, so making them parallel is easy. What is not easy is running stuff concurrently on shared data: that requires some form of synchronization, and async/await with event loops is a very elegant solution.

You say that it can be handled with an SPSC queue and that it only costs one acq/rel. But then add another type of event that can happen concurrently, e.g. a user request to reconfigure the app, or an inactivity timeout. I can trivially handle those by adding more branches to the `select!`, and my code still stays easy to follow. With threads dedicated to each type of concurrent action, all trying to update the state of the app directly, I imagine this can get hairy pretty quickly.
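In tokio-style pseudocode (the channel, timer, and handler names here are hypothetical), those extra branches might look like:

    loop {
      select! {
        _ = channel.readable(), if queue.has_free_space() => read(&mut queue),
        _ = channel.writable(), if queue.has_data() => write(&mut queue),
        // extra concurrent events are just more branches; the handlers
        // still run one at a time against the shared state
        Some(cfg) = reconfig_rx.recv() => apply_config(cfg),
        _ = sleep(INACTIVITY_TIMEOUT) => break,
      }
    }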



