
unwrap() can very often be avoided in favor of a simple `?` operation. The amount of nesting you suggest does occur... but not in a single function, so it's spread over many lines and functions. There are GC libraries for Rust if you need them, or Python/Lua bindings that have GC too.

Have you considered that avoiding race conditions isn't hard, and that you can't escape having to avoid them even in JS?



"Race conditions of threads with shared memory parallelism" - yes you can avoid those in JS, the "yield" points are obvious in all code (await). Although its there if you need it (SharedArrayBuffer)

Regarding whether it's hard or not, yes, I have some first-hand (non-Rust) experience.

On second thought I agree with your original point. Node isn't great for CPU-efficient + multithreaded. But

* most platforms aren't (either unsafe, not CPU-efficient, or both)

* most of the time it's not what you really need.

So a good question would be, what would you use shared memory parallelism for in typical back-end programming?


I've developed plenty of shared memory software. In 99% of cases, a well-placed mutex lock will solve the problem entirely. Of the remaining percent, maybe 5% are the type where you can just not lock and it'll work out sufficiently often. Another 45% might benefit from designing a channel/queue. The remaining 50% need a specialized lock to run well.

But all shared memory problems can, without much difficulty, be solved by using a mutex. It might not give the best performance, but you likely won't need that much performance in the first place (after all, you're coming from the JS ecosystem).

Memory barriers aren't particularly hard to understand either; I've written some systems using them that ran very stably. Same for lockless, atomic, or reentrant algorithms. It's all very fun and doesn't take that much skill if you're willing to read into it.


And plenty of people have said otherwise, and have developed entire languages and formal method frameworks to better reason about these problems. (e.g. TLA+)

So anyway... what would you use shared memory parallelism for in typical back-end programming?


You use the advanced methods to squeeze out more performance; they're not necessary just to build working solutions.

There are plenty of reasons for shared memory in back-ends; worker queues with zero-copy messaging would be one example that a lot of applications can benefit from.


A zero-copy message queue would be an unnecessary micro-optimization for most backends I've worked on. I don't think it's something you would normally do in typical back-end programming, unless your scale is bonkers-level or you're creating infrastructure for others to use (maybe if you're doing analytics for others? except there are other alternatives there...). It's definitely not a niche targeted by node.


I don't think it's a micro-optimization; it's not that terribly complicated if you are careful about the code you write, and it's definitely fun.



