Extremely basic article, written in poor English, fails to explain the real reasons GC isn't used in the places where it isn't used, full of vague pronouncements ("it makes execution flow very un-predictable and hence GC becomes almost mandatory").
And it misses the big advantages of garbage collection: by allowing you to handle arbitrary-sized data structures as easily as if they were integers, it makes your programs easier to understand — not just easier to read!
Um, where I was complaining about the quality of the prose in the original article, I wrote "not just easier to read" when I meant "not just easier to write". Serves me right, I guess.
I wrote a C++ class (compatible with the STL vector class) with an internal reference counter. When the counter reached 0, it deallocated the memory and shut the vector down. It works pretty well and avoids the need for a garbage collector.
It uses constructors and destructors to achieve this.
That "works" (minus circular references) in simple cases, but it still has bad worst-case performance: if a decrement frees an object whose destruction triggers further decrements, the decrements and frees can cascade.
And in something like boost::smart_ptr, the cache performance is awful because the reference count lives in a separate allocation, so every increment/decrement is an extra indirection that gets hit constantly.
Actually, general malloc performance is horrible, and that refcounting implementation will be the absolute worst. With an accurate GC, objects can be moved around in memory for more efficient allocation; add support for generations and it becomes significantly faster than manual allocation. Refcounting misses all of that and screws up on circular references to boot.
The purpose of the reference count is to deallocate the memory exactly once. You create the object (constructor, count = 1), pass it to a function (copy constructor, count = 2), come out of the function (destructor, count = 1), and finally return from the original function (count = 0, deallocate). It's like passing a hashtable around in Lisp - it doesn't pass a copy, it passes a pointer, and thus is more efficient on memory usage and speed.
It's a bit hard to explain, but it's done with a copy constructor. I'm not sure of all the details right now, but I think when the object is passed to another function (as a parameter), the reference count goes up, and when that function returns (destructor), it goes down. I don't think it can be circular, as it uses types, not pointers. The basic idea is the data stays in the same place in memory.
I hate the whole argument that the "problem" of dangling pointers is a reason for GC. Is GC only for covering up your n00b mistakes? GC will never be a reasonable management scheme for realtime or limited-memory systems, where programmer responsibility and predictable behavior are required.
In the case of dangling pointers, garbage collection just converts a potential heap corruption bug into a probable memory leak instead. Not really an obvious win. At least in C you have to be conscious of memory management in order to successfully write programs that don't crash.
Now, with real-time GC systems coming to light, mature GC implementations in VMs like HotSpot, and conservative collectors (the Boehm GC) for languages like C, I think the correct question is why not use GC.
Not worth reading.