
The LLD webpage touts a big speed improvement over gold, which is extremely commendable.

What I'm curious about, though, is memory usage. On relatively constrained systems, linking large projects can take ages due to swapping - anecdotally, linking is responsible for upwards of 80% of the compilation times I see for a certain very large software project on my developer machine (which, at 16GB RAM, isn't huge, but fairly typical). Worse, memory pressure from linking makes the process rather non-parallelizable, which also hurts throughput. Having a memory-efficient linker could significantly speed up compilation beyond just a 2x improvement in such environments.




It shouldn't be bad, although I didn't actually measure its heap usage. Allocating memory and copying data is slow, so in LLD we mmap input object files and use the files directly as much as possible. This increases the process's virtual memory size (because large files stay mapped into memory), but the actual memory usage is much smaller than that.



