
They address that somewhat in the discussion section:

> C++ provides many features for more explicit memory management than is possible with reference counting. For example, it provides allocators [35] to decouple memory management from handling of objects in containers. In principle, this may make it possible to use such an allocator to allocate temporary objects that are known to become obsolete during the deallocation pause described above. Such an allocator could then be freed instantly, removing the described pause from the runtime. However, this approach would require a very detailed, error-prone analysis which objects must and must not be managed by such an allocator, and would not translate well to other kinds of pipelines beyond this particular use case. Since elPrep’s focus is on being an open-ended software framework, this approach is therefore not practical.



Custom allocators are a relatively niche topic, and I agree that library users cannot be expected to customize memory behaviour to that level of detail (especially if the library users typically are not professional developers).

However, a certain level of proficiency must be expected, and in the case of C++ this includes knowing when to use const&, unique_ptr&lt;T&gt;, or shared_ptr&lt;T&gt;. If even this cannot be expected of the user, the comparison becomes less a question of performance and more one of which language is best at being the lowest common denominator.


Of course you can use better allocators; but it's faster to avoid dynamic allocation altogether (e.g. by pointing into memory mapped from the input file by the OS). If they allocate memory for every flyspeck of a 200 GB file, and also create and update a reference counter for each one, nobody should be surprised by the low performance. Have a look at what, e.g., shared_ptr does behind the scenes.


Unless you're streaming, in which case mmap'ed access on Linux is generally slower than read/write. At least it was the last time we checked for the Ardour project (probably about 3 years ago).



That's not even the real benefit. What you really gain is the ability to control memory locality, which can easily have a 10-30x performance impact.



