
Atomics in Objective-C - ovokinder
http://biasedbit.com/blog/objc-atomics/
======
fleitz
This is called a semaphore; it's already implemented in GCD.

Also, why talk about performance and then make obj-c method calls...?

It's quite easy using NSProxy to create a throttler that will wrap any object,
then you can abstract throttling from the behavior of the underlying object.

    
    
      @interface Throttler : NSProxy {
         dispatch_semaphore_t _semaphore;
         id _object;
      }
      - (id) initWithObject:(id)obj concurrentOperations:(int)ops;
    
      @end
      @implementation Throttler
    
      - (id) initWithObject:(id)obj concurrentOperations:(int)ops {
         // NSProxy has no -init, so there's no [super init] to call
         _semaphore = dispatch_semaphore_create(ops);
         _object = obj;
         return self;
      }
    
      - (NSMethodSignature*) methodSignatureForSelector:(SEL)sel {
         // Required for message forwarding to reach -forwardInvocation:
         return [_object methodSignatureForSelector:sel];
      }
    
      - (void) forwardInvocation:(NSInvocation*)invocation {
         // dispatch_semaphore_wait returns 0 on success; DISPATCH_TIME_NOW
         // makes it fail immediately instead of blocking
         if(dispatch_semaphore_wait(_semaphore, DISPATCH_TIME_NOW) == 0){
            @try {
               [invocation setTarget: _object];
               [invocation invoke];
            }
            @finally {
               dispatch_semaphore_signal(_semaphore);
            }
            return;
         }
         @throw [NSException
              exceptionWithName:@"InsufficientResourceException"
              reason:@"Insufficient Resource"
              userInfo:nil];
      }
    
      @end 
    

[https://developer.apple.com/library/mac/documentation/Genera...](https://developer.apple.com/library/mac/documentation/General/Conceptual/ConcurrencyProgrammingGuide/OperationQueues/OperationQueues.html#//apple_ref/doc/uid/TP40008091-CH102-SW24)

------
lyinsteve
Why no mention of GCD here? GCD is very, very good at synchronizing access to
shared resources.

The most Cocoa-compatible way of handling background execution of expensive
procedures is always going to be best executed, quickest, using Grand Central
Dispatch.

For example:

    
    
        @interface Foo ()
        @property (nonatomic) dispatch_queue_t backgroundQueue; // serial queue
        @end
        
        @implementation Foo
    
        - (void) veryExpensiveMethod:(id)arg completion:(void (^)(void))completion {
            dispatch_async(self.backgroundQueue, ^{
                // Critical section runs here, one call at a time
                dispatch_async(dispatch_get_main_queue(), completion);
            });
        }
    
        @end
    

That will ensure every call to -veryExpensiveMethod is run in sequence, and
won't require waiting on your end.

These problems have been solved, better.

~~~
ovokinder
You're missing the most important point of the entire Throttler goal:
gracefully returning _fast_, with success or failure. Nowhere is it stated
that the goal is to enqueue tasks for execution.

If you had read 'til the end you would have found multiple statements that
OSAtomic* is merely an alternative. Not a silver bullet. Not the fastest.

From the conclusion:

"It's very important to understand that every example in this article could
have legitimately been solved with different concurrency primitives — like
semaphores and locks — without any noticeable impact to a human playing around
with your app."

Also, "(...) is always going to be best executed, quickest, using GCD." is
kind of a blanket statement. I'd be careful around the use of "always".

------
asveikau
> This post talks about the use of OS low level atomic functions

This is a pet peeve of mine, to call that an "OS" feature. In all recent CPUs
I know of, atomic ops are _not_ a privileged operation, and there is
absolutely nothing for the operating system to manage in a traditional sense.
You don't trap into the kernel and have _it_ compare-and-swap, you just, um,
compare and swap.

Maybe your OS provides a convenient C API, but it is not "OS" functionality.
It's just instructions on your CPU. You could just as well write them inline.
In many common uses, that's what ends up happening - the atomic ops are put
inline with the rest of your code.

~~~
ovokinder
Fair. I just didn't know how to rephrase that small sentence without unfolding
into the two paragraphs you just wrote.

How would you rephrase that? Just "low level atomic", "atomic"?

~~~
stcredzero
If atomic operations were compatible across processor "families" then you'd
have "the family atomics." (Obscure Dune reference.)

~~~
ovokinder
That was worth 10 upvotes, good sir. Sadly, I can only provide one.

------
richardwhiuk
This is all claimed to be for 'performance', but there are no figures in this
document as to whether the incrementAndGet / decrementAndGet is any faster
than @synchronized.

(I suspect it probably is, but fundamentally, @synchronized is implemented
using compare-and-swap / other processor atomics, so it's probable that the
difference is very slight - e.g. there's only a measurable difference if the
thread is descheduled while holding a lock).

~~~
ovokinder
The goal of the article isn't sheer performance — there are plenty of notes
about that. If it were about pure performance, it'd recommend moving away from
Obj-C classes and methods and using C functions or C++ classes instead, like
std::atomic<>.

It's meant to be a somewhat-easy-to-digest introduction to lock-free design,
where applicable.

What @synchronized ends up doing is far more complex — it has to be, to ensure
the correctness of its purposes: [https://github.com/opensource-apple/objc4/blob/master/runtime/objc-sync.mm](https://github.com/opensource-apple/objc4/blob/master/runtime/objc-sync.mm)

------
liuliu
Or just use std::atomic and std::mutex in Objective-C++. In the Objective-C
world, no memory semantics are well-defined, and all of these are hacks on a
pile of other hacks.

~~~
azinman2
Can you explain more about hacks piled on other hacks?

