Engineering Faster Sorters for Small Sets of Items (arxiv.org)
38 points by matt_d 6 days ago | 9 comments

On a similar topic, I'm trying to find maps optimised for a small maximum number of items (8 or 16), indexed by integers. Curiously, I failed to find any discussion of this. std::map seems very badly suited for this kind of map.


Thanks, but those structures are size-optimised; I'm looking for structures optimised for CPU usage.


I will have a look, thanks. I think I'll implement it with two arrays, one for keys and one for values, and a loop to find the key index, then return the corresponding value. It may be better to fully unroll the loop; I'll have to benchmark this. SSE may be interesting too.
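For what it's worth, here's a minimal sketch of the two-array idea, assuming 32-bit integer keys and a compile-time capacity N. The name SmallIntMap is made up for illustration, not from any library:

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <optional>

    // Hypothetical fixed-capacity flat map: parallel key/value
    // arrays plus a linear scan. N is the small maximum number
    // of items (e.g. 8 or 16).
    template <typename V, std::size_t N>
    class SmallIntMap {
        std::array<std::uint32_t, N> keys_{};
        std::array<V, N> values_{};
        std::size_t size_ = 0;

    public:
        // Insert or overwrite; returns false when the map is full.
        bool put(std::uint32_t key, const V& value) {
            for (std::size_t i = 0; i < size_; ++i)
                if (keys_[i] == key) { values_[i] = value; return true; }
            if (size_ == N) return false;
            keys_[size_] = key;
            values_[size_] = value;
            ++size_;
            return true;
        }

        // Linear scan over the keys; with N a compile-time
        // constant the compiler can fully unroll this loop.
        std::optional<V> get(std::uint32_t key) const {
            for (std::size_t i = 0; i < size_; ++i)
                if (keys_[i] == key) return values_[i];
            return std::nullopt;
        }
    };

And a sketch of the SSE angle, assuming SSE2 and keys narrowed to 16 bits so 8 of them fit in one register (find_key_sse2 is likewise hypothetical). It compares all 8 keys against the probe at once instead of looping:

    #include <emmintrin.h>  // SSE2 intrinsics
    #include <cstdint>

    // Compare all 8 16-bit keys against `key` in one shot.
    // Returns the index of the first match, or -1 if absent.
    // __builtin_ctz is GCC/clang-specific.
    inline int find_key_sse2(const std::uint16_t keys[8], std::uint16_t key) {
        __m128i ks     = _mm_loadu_si128(reinterpret_cast<const __m128i*>(keys));
        __m128i needle = _mm_set1_epi16(static_cast<short>(key));
        int mask = _mm_movemask_epi8(_mm_cmpeq_epi16(ks, needle));
        if (mask == 0) return -1;
        return __builtin_ctz(mask) / 2;  // 2 mask bits per 16-bit lane
    }

One caveat with the SSE version: unused key slots would need a sentinel value that can never match a real key, since all 8 lanes are compared unconditionally.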

The most interesting thing to me is "the compiler reduces its optimizations with increasing compilation effort, when compiling only a single source file." Is this GCC-specific, or does it also hold for clang? And can it be tuned away?

Clang will not "reduce compilation effort", and I do not believe GCC does either. There are heuristics that look at things like function size, but I don't think that is what they mean. Generally speaking, translation units are independent if you do not run link-time optimization (LTO), and optimizations follow a (basically) predetermined order which might be influenced by the input, but not by the amount of work done.

I’ve never heard of or experienced GCC using some kind of cost model for compilation time, either. With all these huge protobuf-generated files, you’d think I would have noticed if it existed.

Love papers like this, hyper-focused on a fun problem and then full of hacks.


