Why was this done?

Did Rust become less dependent on allocator performance, or did system allocators improve enough? IIRC glibc malloc has improved a lot over the last few years, particularly for multithreaded use, but I don't know about Windows / macOS.

So, long ago, Rust actually had a large, Erlang-like runtime, and jemalloc was used for it. Over time, we shed more and more of this runtime, but jemalloc stayed. We didn't have a pluggable allocator story, so we couldn't remove it without causing a regression for people who do need jemalloc. Additionally, jemalloc had already been dropped on some platforms for a long time; Windows has been shipping the system allocator for as long as I can remember.

So, now that we have a stable way to let you use jemalloc, the right default for a systems language is to use the system allocator. If jemalloc makes sense for you, you can still use it, but if not, you save a non-trivial amount of binary size, which matters to a lot of people. See the parent I originally replied to for an example of a very common response when looking at Rust binary sizes.

It's really more about letting you choose the tradeoff than it is about specific improvements between the allocators.
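
For anyone who wants to opt back in, here's a minimal sketch of what that looks like, using the stable #[global_allocator] attribute with the jemallocator crate (the static's name GLOBAL is arbitrary, and you'd add jemallocator as a dependency in Cargo.toml):

    use jemallocator::Jemalloc;

    // Route every heap allocation in this program through jemalloc
    // instead of the platform's system allocator.
    #[global_allocator]
    static GLOBAL: Jemalloc = Jemalloc;

    fn main() {
        // This Vec's buffer is now allocated by jemalloc.
        let v: Vec<u64> = (0..1024).collect();
        println!("allocated {} elements", v.len());
    }

The same hook works for any allocator implementing the GlobalAlloc trait, which is what makes the tradeoff a per-project choice rather than a language-wide default.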
