
What's missing from your article is a demonstration, or a theory, that outlines how transmitting immutable messages as blocks of memory over CPU interconnects outperforms the cache coherence algorithms already in place.

Immutable message transmission has no way to say "I already know about this message; don't send it again."

Most of the good characteristics of Erlang that you describe are achievable in other languages.

So what I'm asking is, by what concrete mechanism does immutable message passing outperform cache coherence?




It's not about raw performance. It's about getting shit done and keeping shit highly available. In large systems, shared data isn't easy to reason about (see: every multithreaded program with shared data segfaulting for some unknown reason this very second).

We can always drop down to C or asm to make ultra-shared data structures with minimal overhead. We could always do that. But people end up wanting to spend their precious lives making a difference in the world and not checking the return value of malloc thirty times per day.


> But people end up wanting to spend their precious lives making a difference in the world and not checking the return value of malloc thirty times per day

So true!


Immutable message passing is an abstraction. In an optimized virtual machine it may well be implemented by leveraging cache coherence. The sender process writes a memory block, which places it into its local cache. Then the receiver process reads this memory block, and cache coherence lets it snoop the block directly from the sender's cache. This is an awesome way to leverage the parts of the cache coherence protocol that play well with immutable memory blocks, and it can also scale well if snooping is not a broadcast. Cache coherence gets ugly quickly when a cache line is contested for writes, be that false sharing or a spin lock. My point, I guess, is that immutability and message passing promote the awesome parts of cache coherence and limit the need for the ugly parts.
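
To make the abstraction concrete, here is a minimal Erlang sketch (module and function names are made up for illustration, and the VM decides how the underlying bytes actually move between caches): the sender builds a binary once and the receiver only ever reads it, so no cache line is ever contested for writes.

    -module(msg_sketch).
    -export([run/0]).

    %% Sender: build an immutable binary once and hand it to the receiver.
    run() ->
        Receiver = spawn(fun loop/0),
        Block = <<"immutable payload">>,   % written once, never mutated
        Receiver ! {block, self(), Block},
        receive
            {ack, Size} -> io:format("receiver saw ~p bytes~n", [Size])
        end.

    %% Receiver: only ever reads the block it was sent, then acknowledges.
    loop() ->
        receive
            {block, From, Bin} when is_binary(Bin) ->
                From ! {ack, byte_size(Bin)},
                loop()
        end.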


I don't think he's arguing that message passing will generally outperform shared state with cache coherence, particularly not for well-designed applications. The argument (as I understand it) is that since the hardware is fundamentally doing message passing, and since that will only become more of a problem as hardware continues to scale, it will be easier to design an application that performs well (though maybe not optimally) if you use the message-passing paradigm, and easier for your application to perform poorly if you don't. You're right, though, that this isn't exclusive to Erlang; it applies to all languages that eschew shared state.



