

Semaphores are surprisingly versatile - nkurz
http://preshing.com/20150316/semaphores-are-surprisingly-versatile/

======
kazinator
Semaphores are _theoretically_ versatile. However, when they are actually used
to construct higher-level primitives, the result is a "Rube Goldberg device".
It is bad engineering.

Also, note that the examples on this page depend on atomic operations in
addition to semaphores, not semaphores alone. If you _only_ use semaphores,
you must also use them to protect the accesses that are required to be atomic,
which complicates the code.

Let's look at LightWeightMutex. This object is so lightweight that we can't
even ask it whether it is currently locked, or who its owner is. These
features are important for error detection and debugging: real-world
requirements that actual mutex implementations satisfy.

Another comment: I find the following completely pointless:

    
    
        void lock()
        {
            if (m_contention.fetch_add(1, std::memory_order_acquire) > 0)
            {
                m_semaphore.wait();
            }
        }
    

A semaphore _is already supposed to implement counting_. What we have here is
an implementation of a counting semaphore, using a semaphore.

That is to say, to implement a lightweight lock using a semaphore, we
actually need only this:

    
    
       void lock() { m_semaphore.wait(); }
    

The semaphore _already has a built-in atomic counter_ equivalent to
m_contention. The atomic increment-and-test wrapped around it is redundant,
and has nothing to do with implementing a lock with a semaphore.

~~~
preshing
If I implement the mutex as you suggest -- by using the native semaphore
directly, with no separate counter -- the running time of "testBenaphore"
increases from 375 ms to 3 seconds on my Windows PC.

As mentioned in the article, most mutex implementations already use this
trick. So you can just use std::mutex, and things are fine.

In the past, though, runtime environments weren't so well-developed, so there
definitely [was a point](http://www.haiku-os.org/legacy-docs/benewsletter/Issue1-26.html#Engineering1-26).

~~~
kazinator
That looks like some API/kernel call overhead; you've moved the fast path of
the semaphore implementation into user space. But what you have there is
undeniably a semaphore implementation: atomically tweak a counter, and based
on that result, wait or signal.

~~~
preshing
> you've moved the fast path of the semaphore implementation into user space.

I see now why your original comment was a bit inflammatory. I should have been
more clear in the post that by "lightweight", I meant exactly that: "fast path
in user space". I guess not everyone shares this vocabulary. I'll improve the
post.

You're right that this lightweight mutex is a semaphore, of course. But not
every semaphore is a lightweight mutex. So the technique isn't pointless.

------
peterwaller
I recently found myself in need of a semaphore in Go. They're quite easy to
make out of channels:

Implementation:

[https://github.com/scraperwiki/s4log/blob/master/semaphore.g...](https://github.com/scraperwiki/s4log/blob/master/semaphore.go)

Use:

[https://github.com/scraperwiki/s4log/blob/2714f9553880121ad6...](https://github.com/scraperwiki/s4log/blob/2714f9553880121ad6852846a4761f65a22967d8/commit.go#L103-L105)

Useful for:

1\. Writing a web handler which may use some CPU time, when you don't want too
many of them in flight at once (to prevent them all from slowing down to mud).
(Best combined with something to abort if we had to wait too long before we
could even start computation.)

2\. Executing parallel uploads to S3 while avoiding doing too many at once
(again, because you don't want very many of them contending on the network
connection such that none proceed).

------
mischanix
If you're interested in writing concurrent code worth running, Mike Acton has
a nice introduction:

[http://cellperformance.beyond3d.com/articles/2009/08/roundup...](http://cellperformance.beyond3d.com/articles/2009/08/roundup-recent-sketches-on-concurrency-data-design-and-performance.html)

