
In that case, most other algorithms (e.g. insertion sort) technically take linear time too (though with a rather impractical constant factor).

Can you elaborate on that? It is proven that sorting based only on comparisons has a lower bound of n log(n) comparison operations. Given that algorithms such as insertion sort use only comparison operations to probe the data, I do not see how they can break that bound.


The n log(n) bound is a lower bound on the worst case, and it is only tight when all elements are distinct. It doesn't apply if there are guaranteed to be a sufficient number of duplicates, because then there are far fewer than n! possible orderings to distinguish.

For example, one might implement a variant of InsertionSort that stores a duplicate count for each distinct element, and then an insertion sort on e.g. 32-bit integers would require at most 4294967296 comparisons per insertion -- a constant factor that can technically be ignored in the complexity analysis. (I did warn you that the constant factor would become unwieldy!)

Note that this doesn't require the values to be integers -- it suffices for them to be comparable and to contain a lot of duplicates. The variant of InsertionSort described above requires O(k×n) comparisons, where `n` is the length of the input list and `k` is the number of distinct values, which is O(n) whenever `k` is bounded by a constant (as required for radixsort to work in linear time).
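A minimal sketch of that duplicate-counting variant, assuming a Python implementation (the function and variable names are made up for illustration; the comment above only describes the idea):

  # Insertion sort over (value, count) pairs: at most k comparisons per
  # insertion, where k is the number of distinct values seen so far,
  # hence O(k*n) comparisons total and O(n) when k is constant.
  def dedup_insertion_sort(items):
      runs = []  # sorted list of [value, count] pairs
      for x in items:
          inserted = False
          for i, (value, _count) in enumerate(runs):
              if x == value:            # duplicate: just bump the count
                  runs[i][1] += 1
                  inserted = True
                  break
              if x < value:             # found the insertion point
                  runs.insert(i, [x, 1])
                  inserted = True
                  break
          if not inserted:              # larger than everything seen so far
              runs.append([x, 1])
      # expand the (value, count) pairs back into a flat sorted list
      return [value for value, count in runs for _ in range(count)]

  print(dedup_insertion_sort([3, 1, 3, 2, 1, 1]))  # -> [1, 1, 1, 2, 3, 3]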

That's not to say that radixsort doesn't outperform other sorting algorithms in practice -- it usually does. However, that isn't obvious from a strictly complexity-theoretic point of view.


What's n log(n) for a known n? A constant -- therefore O(1).

