
How Rust Achieves Thread Safety - nxnfufunezn
http://manishearth.github.io/blog/2015/05/30/how-rust-achieves-thread-safety/
======
Animats
That's a good subject. This Rust blog entry[1] is perhaps a better
explanation.

When you pass data to another thread, there are three options: 1) pass a copy,
2) hand off ownership to the other thread, and 3) transfer ownership to a
mutex object, then borrow it from the mutex object as needed. All of these are
memory and race condition safe due to compile time checking.

Those are the concepts. The Rust syntax needed to support it is somewhat
complicated, but if you get it wrong, you get compile time error messages.
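
A rough sketch of the three options, using only the standard library (the values and thread counts here are illustrative, not from the comment above):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // 1) Pass a copy: i32 is Copy, so the spawned thread gets its own value.
    let n = 42;
    let h1 = thread::spawn(move || n + 1);
    assert_eq!(h1.join().unwrap(), 43);
    // `n` is still usable here, because it was copied rather than moved.

    // 2) Hand off ownership: String is not Copy, so `move` transfers it.
    let s = String::from("hello");
    let h2 = thread::spawn(move || s.len());
    // Using `s` here would be a compile-time error: "value moved".
    assert_eq!(h2.join().unwrap(), 5);

    // 3) Shared state behind a mutex: each thread locks to borrow the data.
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                *c.lock().unwrap() += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4);
}
```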

This may be the biggest advance in software concurrency management since
Dijkstra's P and V. Almost everything in wide use is either P and V under a
different name, or some subset of P and V functionality. Locks are not a basic
part of most languages; the language doesn't know what a lock is locking. The
Ada rendezvous and Java synchronized objects are exceptions. Those were good
ideas, but too restrictive. Finally, we're past that.

Go could have worked this way. Go originally claimed to be concurrency safe,
but it's not. You can pass a reference across a channel, and now you're
sharing an unlocked data object. This is easy to do by accident, because
slices are references. Because Go is garbage collected, it's almost memory
safe (there's a race condition around slice descriptors that can be
exploited), but it doesn't protect the program's data against shared access.
In Rust, when you pass a non-copyable object across a channel, the sender
gives up the right to use it, and the compiler enforces that.
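
A minimal illustration of that hand-off, assuming a standard `mpsc` channel (the vector contents are arbitrary):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();
    let v = vec![1, 2, 3]; // Vec is not Copy
    tx.send(v).unwrap(); // ownership moves into the channel
    // println!("{:?}", v); // compile error: borrow of moved value `v`

    let handle = thread::spawn(move || {
        let received = rx.recv().unwrap();
        received.iter().sum::<i32>()
    });
    assert_eq!(handle.join().unwrap(), 6);
}
```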

[1] http://blog.rust-lang.org/2015/04/10/Fearless-Concurrency.html

~~~
detrino
That's not true, Rust permits opt-in data races in safe code.

~~~
dbaupp
If what you say is true, it's a bug: there should only be a risk of data races
if `unsafe` is used. Do you have a code example?

~~~
veddan
I'm guessing he refers to atomics with relaxed memory ordering. That doesn't
give much in terms of guarantees beyond atomicity and no "out-of-thin-air"
values. I'm not sure whether this counts as a data race under Rust's
definition, though.

~~~
detrino
Rust doesn't get to redefine "data race".

~~~
pcwalton
It doesn't. We use the same definition of "data race" as tools like Thread
Sanitizer and Eraser do.

~~~
detrino
From tsan documentation: "A data race occurs when two threads access the same
variable concurrently and at least one of the accesses is write"

~~~
pcwalton
That's right, and Rust bans that. The definition of "concurrently" typically
means "without synchronization in between". An atomic access is by definition
a form of synchronization.

~~~
detrino
Relaxed access of atomics performs no synchronization.

~~~
Gankro
It guarantees that the value read was _some_ value that was in the variable at
some point. e.g. it prevents a value that toggles from 3 to 5 being read as
117. It also prevents writes from being eaten (e.g. an increment always
actually occurs).

This necessitates some level of synchronization.
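
A small sketch of that guarantee with a `Relaxed` counter (thread and iteration counts are arbitrary): even though relaxed ordering imposes no ordering on surrounding memory, the read-modify-write itself is atomic, so no increment is ever lost and no torn value is ever read.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let counter = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1000 {
                    // Relaxed: no synchronization of other memory, but the
                    // increment itself is atomic and cannot be "eaten".
                    c.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(counter.load(Ordering::Relaxed), 8000);
}
```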

~~~
Manishearth
(It also guarantees coherence on that variable, so if a thread changes it from
3 to 5 and no other thread changes it back, a thread that has seen the 5 will
never read a 3 again. This also needs some level of synchronization.)

