Does reference counting GC really use less memory than tracing GC (flyingfrogblog.blogspot.com)
9 points by jordwalke on Dec 25, 2017 | hide | past | favorite | 4 comments


The comments go into detail about whether it's valid to conclude that the observed results are an effect of the garbage-collection mechanism. The impression I'm left with is that the author is justified in saying there isn't enough evidence to claim that reference counting is superior to tracing, and the comment chain only solidifies that for me.


The advantage of reference counting is determinism: memory is freed as soon as the last reference to it is removed.

A tracing GC instead periodically scans the heap and frees memory in batches, which is more efficient but non-deterministic.
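The claimed determinism is easy to see in CPython, which uses reference counting as its primary memory-management strategy: the destructor runs at the exact point the last reference disappears. A minimal sketch (the `Tracked` class and `events` list are illustrative, not from the article):

```python
# CPython frees a refcounted object the instant its last reference goes
# away, so the destructor's timing is deterministic.
events = []

class Tracked:
    def __del__(self):
        events.append("freed")

obj = Tracked()
alias = obj
del obj                  # refcount 2 -> 1: nothing is freed yet
events.append("one ref left")
del alias                # refcount 1 -> 0: __del__ runs right here
events.append("after last del")
# events == ["one ref left", "freed", "after last del"]
```

Under a tracing collector, by contrast, the gap between the last `del` and the actual reclamation depends on when the next collection cycle happens to run.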


I think by “determinism” you mean “predictability,” because OCaml’s GC is also deterministic (see the Core_bench benchmarking blog post, which shows how you actually need to account for this determinism when benchmarking).

You could claim that RC is more predictable to humans. I think you then have to decide whether that’s true only for current, specific implementations of RC and tracing. I’m not convinced it is an inherent property of RC, or that the same or greater levels of predictability cannot be achieved in hypothetical or even existing implementations of tracing. What if you only performed traces during times the user was not interacting? If you can explicitly control that, couldn’t that be even more “predictable” than releasing a final RC reference and hoping it doesn’t cause a cascading chain of destructors of hard-to-predict size?
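The "trace only when idle" idea can be sketched with CPython's cycle collector, which exposes explicit control over when a tracing pass runs (`gc.disable()` and `gc.collect()` are real APIs; treating the call site as an "idle point" is the hypothetical part):

```python
# Sketch: disable automatic collection, then run a tracing pass only at a
# moment the application chooses (e.g. between user interactions).
import gc

class Node:
    def __init__(self):
        self.ref = None

def make_cycle():
    a, b = Node(), Node()
    a.ref, b.ref = b, a   # reference cycle: refcounting alone can't free it

gc.disable()              # no automatic tracing passes from here on
make_cycle()
# ... latency-sensitive work runs without collector interruptions ...
freed = gc.collect()      # explicit pass at a chosen "idle" point
gc.enable()
assert freed > 0          # the cycle was reclaimed during the chosen pause
```

The collection still takes however long it takes, but the program, not the runtime, decides when that pause happens.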


With RC, you might be able to predict when a final reference count goes to zero (though in practice, in large apps, that might be difficult; I don’t have enough experience). But that doesn’t help you predict the size of the memory graph released as a result, and therefore the time it takes to release it. The size could be determined by lists of arbitrary length that cannot be known statically, or by implementation details intentionally hidden from the API consumer to preserve encapsulation. I’m still willing to concede that RC is somewhat more predictable for current RC and tracing implementations, but in my brief experience this predictability isn’t as significant as you would hope, and it’s very hard to make use of it. One way people try to exploit it is by shipping references off to another thread, so that the final decrement happens off the main thread in case it hits zero and cascades into a large collection stall. That isn’t really a satisfactory solution and has its own problems.
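The deferred-release trick described above can be sketched in CPython: hand the last reference to a worker thread so the final decrement, and any destructor cascade, happens there instead of on the main thread (the `BigGraph` class and the "graveyard" queue are illustrative names, not from the thread):

```python
# Sketch: ship references to a worker so a potentially expensive
# destructor cascade runs off the main thread.
import queue
import threading

destroyed_on = []          # records which thread ran the destructor

class BigGraph:
    """Stand-in for a large object graph with an expensive teardown."""
    def __del__(self):
        destroyed_on.append(threading.current_thread().name)

graveyard = queue.Queue()

def reaper():
    while True:
        obj = graveyard.get()
        if obj is None:    # shutdown sentinel
            break
        del obj            # final decrement happens on this worker thread

worker = threading.Thread(target=reaper, name="reaper")
worker.start()

g = BigGraph()
graveyard.put(g)           # the queue now also holds a reference
del g                      # main thread gives up its reference; no stall here
graveyard.put(None)
worker.join()
# destroyed_on == ["reaper"]
```

The downsides the comment alludes to still apply: destructors now run concurrently with the main thread, so anything they touch must be thread-safe, and the memory is not actually reclaimed any sooner.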



