
Efficient lock-free durable sets - feross
https://blog.acolyer.org/2019/12/02/efficient-lock-free-durable-sets/
======
kabdib
I kept having this sense of deja-vu as I read the article. This is essentially
how the persistent store of the Apple Newton worked. Pretty neat.

Our stuff supported vanilla battery-backed RAM, and we also supported memory-
mapped flash (which has implications for how you represent the object state,
since you can only flip bits in one direction without an expensive and
destructive erase operation).

Recovery after a crash was essentially a read pass of the whole device, which
was fine for the storage capacities of the day, at memory-bus speeds.

------
qtplatypus
The summary misses out on some key details that are most likely in the paper
but are really needed to evaluate this.

How do you ensure that two nodes are not initialised at the same place?

~~~
scott_s
Lock-free data structures tend to use the compare-and-swap atomic operation as
a primitive: [https://en.wikipedia.org/wiki/Compare-and-swap](https://en.wikipedia.org/wiki/Compare-and-swap)

You provide three values to a CAS operation: the memory address you want to
change, the value you _expect_ to be there and the new value you _want_ to be
there. If the expected value is not what is at that memory address, the CAS
fails; presumably, the expected value is not there because another thread's
CAS succeeded before you. You now need to try again (and maybe do some more
work before doing so).

You can see this in Section 3.3, "The insert operation":
[https://arxiv.org/abs/1909.02852](https://arxiv.org/abs/1909.02852)

I think this answers your question: algorithms for data structures like this
one use primitives like CAS to ensure that the data structure remains valid
even when multiple threads are modifying it, and all without mutual exclusion
mechanisms like locks and mutexes.

