Simple, Fast and Safe Manual Memory Management (2017) [pdf] (microsoft.com)
60 points by palerdot on April 29, 2019 | 12 comments



> we just add a delete operator to free memory explicitly and an exception which is thrown if the program dereferences a pointer to freed memory.

Is there any sensible thing to do when this exception is caught, apart from halting the program? If the answer is "no", why have an exception at all?
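To make the question concrete, here is a toy Python sketch (my own construction, not the paper's C++ mechanism) of the semantics being discussed: a heap where freed slots stay "poisoned", so a later dereference raises instead of silently returning stale data.

```python
# Hypothetical illustration only: simulates "dereferencing freed memory
# throws an exception" with a dict-backed heap and tombstoned slots.

class UseAfterFreeError(Exception):
    """Raised when a pointer to freed memory is dereferenced."""

class Heap:
    def __init__(self):
        self._objects = {}   # "address" -> stored object
        self._next = 0

    def new(self, value):
        addr = self._next
        self._next += 1
        self._objects[addr] = value
        return addr

    def delete(self, addr):
        # Tombstone the slot instead of reusing it immediately.
        self._objects[addr] = None

    def deref(self, addr):
        value = self._objects.get(addr)
        if value is None:
            raise UseAfterFreeError(f"dereference of freed address {addr}")
        return value

heap = Heap()
p = heap.new("payload")
assert heap.deref(p) == "payload"
heap.delete(p)
try:
    heap.deref(p)            # raises instead of reading stale data
except UseAfterFreeError:
    print("caught use-after-free")
```

Whether anything useful can be done in that `except` block, beyond logging and shutting down cleanly, is exactly the question above.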


Long ago I worked on a 3D scene exporter for 3DS MAX. It was complicated, unreliable code built on top of a complicated, unreliable interface. I put a big structured exception handler around the entire Export() process so that when random stuff went horribly wrong, I could pop up a dialog saying "The export failed. But, at least MAX didn't crash. An error has been logged. You should back up your work and restart MAX now."


This alone is the main selling point of exceptions.


Because halting as soon as the memory error happens is better than having the system operate under the false assumption that the pointer is still valid.

Then they can patch it and re-deploy.


Here's a video by one of the team members:

https://www.youtube.com/watch?v=C07s5LTuTmE


Thanks, that's very helpful.


Access to deleted objects is detected by marking objects deleted and not reclaiming their memory until the containing pages are mostly full of deleted objects; then they relocate the still-live objects and unmap the pages. That is, they use MMU tricks, but they amortize the MMU cost (otherwise performance would be absolutely abysmal).
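The amortization scheme described above can be sketched roughly like this (a toy Python model; the slot counts, threshold, and class names are my own, not from the paper):

```python
# Toy model of amortized reclamation: delete only tombstones an object;
# a page is "unmapped" only once most of its slots are dead, after its
# survivors have been relocated to a fresh page.

PAGE_SLOTS = 8
RECLAIM_THRESHOLD = 0.75     # unmap when >= 75% of a page's slots are dead

class Page:
    def __init__(self):
        self.slots = []      # stored objects; None marks a deleted slot

    def dead_fraction(self):
        if not self.slots:
            return 0.0
        return self.slots.count(None) / len(self.slots)

class Allocator:
    def __init__(self):
        self.pages = [Page()]
        self.unmapped = 0    # pages "returned to the OS" so far

    def alloc(self, obj):
        page = self.pages[-1]
        if len(page.slots) == PAGE_SLOTS:
            page = Page()
            self.pages.append(page)
        page.slots.append(obj)
        return page, len(page.slots) - 1

    def delete(self, page, idx):
        page.slots[idx] = None           # tombstone; memory not reused yet
        if page.dead_fraction() >= RECLAIM_THRESHOLD:
            survivors = [o for o in page.slots if o is not None]
            for obj in survivors:        # relocate still-live objects
                self.alloc(obj)
            self.pages.remove(page)      # stand-in for unmapping the page
            self.unmapped += 1
```

The relocation step is where the GC-like cost comes in: any outstanding references into the unmapped page now point at moved objects and have to be patched.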

Note that they still need to rewrite object references, which is a lot like a GC. EDIT: No, sorry, they patch references lazily, though this still incurs some overhead.

Also, this does nothing to protect against leaks.

Interesting idea, but too-little-too-late. I think Rust is the better answer for now.

EDIT: Great question from the Q&A of the video, "why are you not getting killed by TLB misses?", and the presenter did not know. This is a really important question.


I just skimmed the paper since I should really get to bed, but:

> We do not guarantee that all dereferences to deleted objects will throw an exception.

What happens in the case where the dereference doesn’t throw an exception? Execution proceeds as if the dangling pointer pointed to the “zombie” object?


The paper says that if the referenced object has not yet been deleted, the access is valid and execution proceeds.


I wonder what would happen if you just keep on allocating and never free anything. The system would eventually swap out everything that should be freed. A 64-bit process has plenty of address space, and hard disks are huge.

Of course, there is probably some huge painful punishment in store if you try this in real life; I just wonder what it would be. Your average GUI process that shuts down every day or so maybe wouldn't suffer too much.


Fragmentation would mean that a much larger number of pages than is needed to store the live heap gets frequently referenced, hence thrashing. Even with SSDs, the thrashing would be too painful.
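Some rough numbers make the point (my own illustrative figures, not from the thread): if live objects end up scattered one per page, the resident set is driven by the number of touched pages, not by the live bytes.

```python
# Back-of-envelope sketch of the fragmentation problem under
# "never free anything": hypothetical 4 KiB pages and 64-byte objects.

PAGE_SIZE = 4096
OBJ_SIZE = 64

def resident_pages(live_objects, objects_per_page):
    # Pages the OS must keep swapped in if each page holds this many
    # frequently-referenced objects (ceiling division).
    return -(-live_objects // objects_per_page)

live = 1_000_000
compact = resident_pages(live, PAGE_SIZE // OBJ_SIZE)   # densely packed
fragmented = resident_pages(live, 1)                    # one object per page

print(compact, "pages if compact")        # 15625 pages, ~61 MiB
print(fragmented, "pages if fragmented")  # 1000000 pages, ~3.8 GiB
```

Same 64 MB of live data either way; fragmentation inflates the working set by the objects-per-page factor, which is what turns swapping into thrashing.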


I really like this paper. There's a large class of problems where tracing GC is very suboptimal from a performance/efficiency perspective but classic manual memory management is unacceptable from a security/correctness perspective.





