
async/await is about overlapping useful CPU computation with slow I/O operations.

For example, you would read data fragment D0 from a network socket, and right before doing any work on it, you would ask the OS to fetch data fragment D1, and so on. This is faster than reading and working in distinct time intervals. And despite whatever you seem to believe, it would also be faster than reading and working with thread parallelism: even if you eliminate the synchronization overhead or devise a good lock-free algorithm, thread context switches still carry a massive overhead, not to mention the issues around memory bandwidth and cache coherence.
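
To make that concrete, here is a minimal TypeScript sketch of that loop. readFragment and work are hypothetical placeholders, not any real API; the only point is that the read of fragment i+1 is requested before the CPU work on fragment i starts, so the two overlap.

    // Hypothetical pipeline: overlap the read of fragment i+1 with the work on fragment i.
    async function pipeline(
      readFragment: (i: number) => Promise<Uint8Array | null>, // resolves to null at end of stream
      work: (data: Uint8Array) => void,                        // synchronous CPU work
    ): Promise<void> {
      let pending = readFragment(0);          // kick off the first read
      for (let i = 0; ; i++) {
        const current = await pending;        // wait for fragment i
        if (current === null) break;          // no more data
        pending = readFragment(i + 1);        // ask for fragment i+1 immediately...
        work(current);                        // ...and do the CPU work on fragment i while that read is in flight
      }
    }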

Still, how does one gather the nerve to call a feature present in most modern languages, from C# to Zig, hype?




I should add that the concurrent algorithm I described for processing data from a socket, i.e. asynchronously read data fragment (i + 1) and then do work on data fragment i, would only be optimal if the throughput of the work you do on the data is higher than the reading throughput.

Even if the above condition doesn't hold, you would have to be very careful to do better with threads.
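
A back-of-the-envelope comparison shows both points (the per-fragment read time r, work time w, and fragment count n below are made-up numbers): the pipelined loop costs roughly max(r, w) per fragment instead of r + w, so it always beats the sequential version, but once w exceeds r the loop becomes compute-bound, which is exactly the regime where threads could start to pay off.

    // Rough timing model for the pipeline above; all numbers are assumptions.
    function compare(r: number, w: number, n: number) {
      const sequential = n * (r + w);            // read, then work, in distinct intervals
      const pipelined = r + n * Math.max(r, w);  // reads overlap with work; the slower side dominates
      return { sequential, pipelined };
    }

    console.log(compare(10, 5, 100));   // work faster than reads: { sequential: 1500, pipelined: 1010 }
    console.log(compare(10, 15, 100));  // work slower than reads: { sequential: 2500, pipelined: 1510 }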

Is it just an arbitrary design decision that NodeJS is single-threaded?



