Long ago I worked on a 3D scene exporter for 3DS MAX. It was complicated, unreliable code built on top of a complicated, unreliable interface. I put a big structured exception handler around the entire Export() process so that when random stuff went horribly wrong, I could pop up a dialog saying "The export failed. But, at least MAX didn't crash. An error has been logged. You should back up your work and restart MAX now."
Access to deleted objects is detected by marking objects as deleted and not reclaiming their memory until the containing pages are mostly full of deleted objects; at that point the allocator relocates the still-live objects and unmaps the pages. That is, they use MMU tricks, but they amortize the MMU cost (otherwise performance would be absolutely abysmal).
Note that they still need to rewrite object references, which is a lot like a GC. EDIT: No, sorry, they lazily patch references, though this adds some overhead.
Also, this does nothing to protect against leaks.
Interesting idea, but too little, too late. I think Rust is the better answer for now.
EDIT: Great question from the Q&A of the video, "why are you not getting killed by TLB misses?", and the presenter did not know. This is a really important question.
I just skimmed the paper since I should really get to bed, but:
> We do not guarantee that all dereferences to deleted objects will throw an exception.
What happens in the case where the dereference doesn’t throw an exception? Execution proceeds as if the dangling pointer pointed to the “zombie” object?
I wonder what would happen if you just keep on allocating and never free anything. The system would eventually swap out everything that should be freed. A 64-bit process has plenty of address space, and hard disks are huge.
Of course, there is probably some huge painful punishment in store if you try this in real life. I just wonder what it is. Your average GUI process that shuts down every day or so maybe wouldn't suffer too much.
Fragmentation will mean that far more pages than are needed to hold the live heap get frequently referenced, and thus thrashing. Even with SSDs, thrashing would be too painful.
I really like this paper. There's a large class of problems where tracing GC is very suboptimal from a performance/efficiency perspective but classic manual memory management is unacceptable from a security/correctness perspective.
Is there any sensible thing to do when this exception is caught, apart from halting the program? If the answer is "no", why have an exception at all?