Reference counting steals cycles all over the place. The bookkeeping itself costs time, and the effect of dropping a reference is unpredictable: whichever module happens to drop the last reference to an object that is the last gateway into a large graph of objects will suffer a storm of recursive reference-count drops and deallocations.
If you have a limited number of file handles, you may want them closed ASAP, and not when some reference-counting mechanism or GC decides. Reference counting is not ASAP. Typically, you have some smart pointers which will drop the reference at the end of some lexical scope. That can be too late: the file-handle object could stay in scope across some long-running function, and so the file is held open. The fix is to call the close method on the object, and then let refcounting reap the object later. (Woe to you if you decide to manually take over reference counting and start hand-coding acquire and release calls. Been there, debugged that.)
I implemented weak hash tables in a garbage collector; it's neither complicated nor difficult. Under GC, we use weak referencing for essential uses that require the semantics, not as a crutch for breaking circular references.
The effect of dropping a reference is sometimes predictable. For example, Color cannot root a large object graph, so dropping a reference to Color will deallocate at most one object. At least it does not require nonsense like Android's allocation-in-onDraw warning.
I worked on the now-deprecated GC for Cocoa frameworks, and we made heavy use of weak references for out-of-line storage. This put us at risk for cycles: if A references B through a weak-to-strong global map, and B in turn references A, we have an uncollectible cycle even under GC. This represented a large class of bugs we encountered under GC.
So both GC and RC have their classes of cycles and unpredictable behavior. I've come to believe that these techniques are more related than we'd like to admit, and the real difference is in their second order effects. For example, GC enables lockless hash tables, which require hazard pointers or other awkward techniques under RC. On the other hand, RC enables cheap copy-on-write, which is how Swift's collections can be value types.
An atomic increment/decrement takes so little time as to make this irrelevant. If you're in such a tight loop that you care about a single increment when calling a function (to pass a parameter in), you should have inlined that function and preallocated the memory you're dealing with.
I'm talking about general use of smart pointers, which means that there's a function call involved with the smart pointer value copy, and throwing an increment in is trivial by comparison.
>whichever module happens to drop the last reference to an object which is the last gateway to a large graph of objects
When writing games, I don't think I ever had a "large graph of objects" get dropped at some random time. Typically when you drop a "large graph" it's because you're clearing an entire game level, for instance. Glitches aren't as important when the user is just watching a progress bar.
And you can still apply "ownership semantics" on graphs like that, so that the world graph logically "owns" the objects, and when the world graph releases the object, it does so by placing it on the "to be cleared" list instead of just nulling the reference.
Then, in the rare case where something else still holds a reference to the object, it won't just crash when it tries to use it. Yes, that straggler's release could trigger a surprise extra deallocation chain, as you've suggested.
If that's ever determined to be an issue (via profiling!) you can ensure other objects hold weak references to each other (which is safer anyway), in which case only the main graph is ever in danger of releasing objects -- and it can queue up the releases and time-box how many it does per frame.
Honestly having objects reference each other isn't typically the best answer anyway; having object listeners and broadcast channels and similar is much better, in which case you define the semantics of a "listener" to always use a weak reference, and every time you broadcast on that channel you cull any dead listeners.
Aside from all of that, if you're using object pools, you'd need to deallocate thousands, maybe tens of thousands, of objects in order for it to take enough time to glitch a frame. Meaning that in typical game usage you pretty much never see that. A huge streaming world might hit those thresholds, but a huge streaming world has a whole lot of interesting challenges to be overcome -- and would likely thrash a GC-based system pretty badly.