Mutable shared state is a feature, not a bug.

It's true that it's easier to write correct async code using immutable shared data or unshared data.

However, it's very hard, if not impossible, to write fast, low-memory concurrent algorithms without mutable shared state.
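To make that concrete, here's a rough Rust sketch (the 256-bin histogram, the thread count, and the data sizes are just made up for illustration): every thread bumps counters in one shared array in place via atomics, so there's exactly one copy of the state instead of a private histogram per thread plus a merge pass.

    use std::sync::atomic::{AtomicU32, Ordering};
    use std::thread;

    // Illustrative sketch: one shared, mutable histogram updated in place
    // by all threads via relaxed atomic increments.
    fn histogram(data: &[u8], num_threads: usize) -> Vec<u32> {
        let bins: Vec<AtomicU32> = (0..256).map(|_| AtomicU32::new(0)).collect();
        let chunk = (data.len() / num_threads).max(1);
        thread::scope(|s| {
            let bins = &bins;
            for part in data.chunks(chunk) {
                s.spawn(move || {
                    for &b in part {
                        bins[b as usize].fetch_add(1, Ordering::Relaxed);
                    }
                });
            }
        });
        bins.into_iter().map(|b| b.into_inner()).collect()
    }

    fn main() {
        let data = vec![42u8; 1_000_000];
        assert_eq!(histogram(&data, 4)[42], 1_000_000);
    }

The tradeoff is coherence traffic on hot bins; per-thread copies buy less contention at the cost of extra memory and a merge pass, which is basically the tension this whole thread is about.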





Idk, it's a general rule of thumb that the more mutable shared state an algorithm has, the worse it scales. So if you're trying to scale something to be concurrent, mutable shared state is an antipattern.

At scale, algorithms are commonly limited by memory bandwidth, not concurrency. Most code can be engineered with enough cheap concurrency to efficiently saturate memory bandwidth.

This explains why massively parallel HPC codes are mostly minimal-mutable-state designs despite seemingly poor theoretical properties for parallelism. Real-world performance and scalability are dictated by minimizing memory copies and maximizing cache disjointness.


It's certainly true that some things are limited by memory bandwidth. But it's also common to be limited in other ways.

As someone else noted, it's lock contention that doesn't scale, not mutable shared state per se. Lock-free data structures, patterns like RCU ... in many cases these will scale entirely appropriately to the case at hand. A lot of situations that require high-scale mutable shared state have an inherent asymmetry in the data usage (e.g. one consumer, many writers; many consumers, one writer) that nearly always allows a better pattern than "wrap it in a mutex".
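For the many-writers/one-consumer shape specifically, a channel is usually the better pattern than a mutex around the aggregate. A minimal sketch (std only, all the numbers are made up):

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        // Many writers, one consumer: each producer sends into the channel,
        // and the single consumer owns the mutable aggregate outright, so
        // there's no lock around the aggregate itself.
        let (tx, rx) = mpsc::channel::<u64>();

        let mut producers = Vec::new();
        for id in 0..4u64 {
            let tx = tx.clone();
            producers.push(thread::spawn(move || {
                for i in 0..1_000 {
                    tx.send(id * 1_000 + i).unwrap();
                }
            }));
        }
        drop(tx); // channel closes once every producer's clone is dropped

        // Single consumer: exclusive ownership, no contention on `total`.
        let mut total = 0u64;
        for v in rx {
            total += v;
        }
        for p in producers {
            p.join().unwrap();
        }
        println!("received sum = {total}");
    }

The only synchronization left is the channel itself, which is exactly the kind of asymmetry-aware structure I mean.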

Mutable shared state is literally the nature of contention. It's true that locking is the mediocre default, but "avoid locks" is not a silver bullet. Alternatives have their own tradeoffs. If you "carefully design" a solution, it's probably because you're not just using an alternative but actually taking care to optimize, and because you have a specific use case (which you described).

No, it's the mutable shared state that is the problem. Lock contention is just downstream of the same problems as any other mutable shared state.

> patterns like RCU

RCU isn't mutable shared state! It's sharing immutable state! That's the whole paradigm.


What do you think the "update" part of RCU actually does?

RCU publishes a new pointer after Copying and Updating the copied state. It's not mutating in place. You don't use it for frequent updates.

https://en.wikipedia.org/wiki/Read-copy-update#Name_and_over...
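For what it's worth, the copy-then-publish shape looks roughly like this in Rust. To be clear, this is not real RCU: I'm cheating with an RwLock just to swap the pointer, whereas real RCU makes the read side nearly free and defers reclamation. It only illustrates the "readers hold an immutable snapshot, writers copy, update, and publish a new one" idea; in Rust, crates like arc-swap or crossbeam-epoch are the usual way to get the lock-free publish and deferred-reclamation parts.

    use std::sync::{Arc, RwLock};
    use std::thread;

    // Not real RCU: the RwLock only guards the pointer swap. Readers clone
    // an Arc to the current snapshot and then read it with no lock held;
    // the writer copies the snapshot, updates the copy, and publishes a new
    // Arc. The old snapshot is freed when its last reader drops the Arc.
    fn main() {
        let config = RwLock::new(Arc::new(vec![1, 2, 3]));

        thread::scope(|s| {
            // Reader: grab a snapshot, then work on it without holding a lock.
            // Whether it sees the old or new version depends on timing.
            s.spawn(|| {
                let snapshot = Arc::clone(&*config.read().unwrap());
                println!("reader sees {:?}", snapshot);
            });

            // Writer: read, COPY, UPDATE the copy, then publish it.
            s.spawn(|| {
                let mut next = (**config.read().unwrap()).clone(); // copy
                next.push(4);                                      // update
                *config.write().unwrap() = Arc::new(next);         // publish
            });
        });

        println!("final: {:?}", **config.read().unwrap());
    }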


You may not use it for frequent updates; we do.

There is state.

It is shared state.

It is mutable state.

There are readers, who can see the shared state.

There are writers, who copy it, update it and push it back so that the new state is visible to everyone else.

I have no idea what your definition of mutable shared state might be if that doesn't fit it.


Nothing is preventing you from writing or using lock-free data structures or other solutions in Rust.

It's lock contention that slows things down more than anything.

But it's really an 'it depends' situation.

The fastest algorithms will smartly divide up the shared data being operated on in a way that avoids contention. For example, when working on a matrix, divide it into tiles that are processed concurrently.
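A rough sketch of that tiling idea (row bands rather than true 2D tiles, to keep it short; the sizes are arbitrary): each worker gets an exclusive &mut slice of the one shared allocation, so nothing is ever contended.

    use std::thread;

    // Scale each row band of a ROWS x COLS matrix (stored row-major in one
    // Vec) on its own thread. chunks_mut hands out disjoint &mut slices, so
    // there is no lock and no shared cache line between workers.
    fn main() {
        const ROWS: usize = 1024;
        const COLS: usize = 1024;
        const BAND: usize = 256; // rows per band

        let mut matrix = vec![1.0f64; ROWS * COLS];

        thread::scope(|s| {
            for band in matrix.chunks_mut(BAND * COLS) {
                s.spawn(move || {
                    for x in band.iter_mut() {
                        *x *= 2.0;
                    }
                });
            }
        });

        assert!(matrix.iter().all(|&x| x == 2.0));
    }

Real 2D tiling would carve both dimensions so each tile fits in cache, but the ownership story is the same: disjoint mutable borrows, nothing to lock.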


> It's lock contention that slows things down more than anything.

It's all flavors of the same thing. Lock contention is slow because sharing mutable state between cores is slow. It's all ~MOESI cache-coherence traffic underneath.
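You can see that cost directly without any locks in the picture at all. A small sketch (the 64-byte line size and the iteration count are assumptions, and absolute timings vary wildly by machine, but the gap is usually large):

    use std::sync::atomic::{AtomicU64, Ordering};
    use std::thread;
    use std::time::Instant;

    // Pad a counter out to its own cache line (64 bytes assumed).
    #[repr(align(64))]
    struct Padded(AtomicU64);

    fn bump(counter: &AtomicU64, iters: u64) {
        for _ in 0..iters {
            counter.fetch_add(1, Ordering::Relaxed);
        }
    }

    fn main() {
        const N: u64 = 10_000_000;

        // Two "independent" counters that almost certainly share a cache
        // line: the line ping-pongs between cores on every increment.
        let same_line = [AtomicU64::new(0), AtomicU64::new(0)];
        let t = Instant::now();
        thread::scope(|s| {
            s.spawn(|| bump(&same_line[0], N));
            s.spawn(|| bump(&same_line[1], N));
        });
        println!("same cache line:      {:?}", t.elapsed());

        // Same work, but each counter owns its line: no coherence traffic.
        let separate = [Padded(AtomicU64::new(0)), Padded(AtomicU64::new(0))];
        let t = Instant::now();
        thread::scope(|s| {
            s.spawn(|| bump(&separate[0].0, N));
            s.spawn(|| bump(&separate[1].0, N));
        });
        println!("separate cache lines: {:?}", t.elapsed());
    }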

> The fastest algorithms will smartly divide up the shared data being operated on in a way that avoids contention. For example, if working on a matrix, then dividing that matrix into tiles that are concurrently processed.

Yes. Aka shared nothing, or read-only shared state.



