
Slightly Skeptical View on Sorting Algorithms - nkurz
http://www.softpanorama.org/Algorithms/sorting.shtml
======
flebron
>Please note that O matters. Even if both algorithms belong to O(n log n)
class, algorithm for which O=1 is 100 times faster than algorithm for which
O=100.

Wait, what? O = 100? That's just... not how it works.

~~~
nightcracker
I think they meant to illustrate that O(100) == O(1), even though a runtime of
100 is a hundred times slower than a runtime of 1.

------
mqsiuser
> Among complex algorithms Mergesort is stable

> Among non-stable algorithms [...] quicksort [...]

QuickSort can be implemented so that it is stable:
[http://www.mqseries.net/phpBB2/viewtopic.php?p=273722&highli...](http://www.mqseries.net/phpBB2/viewtopic.php?p=273722&highlight=#273722)
(I am author, AMA)

Why is GNU coreutils' sort implemented as mergesort (also in place, but
slower)?

Edit: And sorry, in-place matters: Quicksort is fastest AND uses the least memory.

Who the heck can say something about the input (e.g. that it may be "pre-sorted")?!

~~~
abetusk
As far as I know, Quicksort cannot be implemented to be stable without an
auxiliary array. So implementing Quicksort to be stable destroys the in-place
feature.

If you want something in-place and stable, you'll have to use something like
WikiSort [1] or GrailSort [2].

[1]
[https://github.com/BonzaiThePenguin/WikiSort](https://github.com/BonzaiThePenguin/WikiSort)

[2] [https://github.com/Mrrl/GrailSort](https://github.com/Mrrl/GrailSort)
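For illustration, here is a minimal sketch (not taken from either linked project) of the trade-off being discussed: quicksort made stable by partitioning into auxiliary arrays, which preserves the relative order of equal keys but gives up the in-place property.

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Stable quicksort sketch: partitioning into auxiliary vectors preserves
// the relative order of equal keys, at the cost of O(n) extra space.
// Elements are (key, original_index) pairs so stability is observable.
using Item = std::pair<int, int>;

void stable_quicksort(std::vector<Item>& a) {
    if (a.size() <= 1) return;
    int pivot = a[a.size() / 2].first;
    std::vector<Item> less, equal, greater;  // the auxiliary arrays
    for (const Item& x : a) {               // one stable left-to-right pass
        if (x.first < pivot) less.push_back(x);
        else if (x.first > pivot) greater.push_back(x);
        else equal.push_back(x);
    }
    stable_quicksort(less);
    stable_quicksort(greater);
    a.clear();
    a.insert(a.end(), less.begin(), less.end());
    a.insert(a.end(), equal.begin(), equal.end());
    a.insert(a.end(), greater.begin(), greater.end());
}
```

Equal keys come out in their original order because the single forward pass never reorders within a partition; the extra vectors are exactly the auxiliary storage the comment above says is unavoidable.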

~~~
mqsiuser
> Quicksort cannot be implemented to be stable without an auxiliary array

Okay, you need an additional array (I am using a separate array, the "result
array") [1]. But that doesn't matter, since the additional array can just grow
while the partitions/other arrays shrink.

Though my implementation is not cache-aware, which is very interesting and
quite relevant for performance.

[1] Actually I am using a linked tree data structure. "In-place" IS HIGHLY
relevant: the input data can be large, (already) filling up (almost) all RAM,
and then these programs ("Execution Groups") terminate "the old way", i.e.
they just abend.

And hence it stands: by the way, I have proven that you can implement Quicksort
STABLE _AND_ IN-PLACE.

Thank you :) and please fix your wording when saying "Quicksort is..."

------
TheLoneWolfling
The problem with radix sort is the same problem in general with big-O
notation:

A factor of log(n) (or less) can often be overwhelmed, for all practical input
sizes, by a difference in constant factors.

~~~
sika_grr
No. For radix sort this constant factor is quite low, so it outperforms
std::sort in most cases.

Try sorting a million integers; my results are: std::sort: 60 ms; radix sort
(LSB, hand-coded, less than 20 lines): 11 ms. It gets even better when you mix
MSB with LSB passes for better cache locality.

No, there are no problems with doubles or negative integers. For
sizeof(key_value) <= 16 bytes (8 for the key, 8 for a pointer), radix is the
best sort on desktop computers.
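As a sketch of what such a hand-coded LSB radix sort might look like (an illustrative version for 32-bit unsigned keys, not the commenter's actual code):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// LSB (least-significant-byte-first) radix sort: four counting-sort passes,
// one per byte of the 32-bit key. O(n) time, O(n) extra space, stable.
void radix_sort(std::vector<uint32_t>& a) {
    std::vector<uint32_t> buf(a.size());
    for (int shift = 0; shift < 32; shift += 8) {
        std::size_t count[257] = {0};
        for (uint32_t x : a)                       // histogram this byte
            count[((x >> shift) & 0xFF) + 1]++;
        for (int i = 0; i < 256; i++)              // prefix sums -> bucket offsets
            count[i + 1] += count[i];
        for (uint32_t x : a)                       // stable scatter into buckets
            buf[count[(x >> shift) & 0xFF]++] = x;
        a.swap(buf);                               // output becomes next pass's input
    }
}
```

Each pass is stable, so after the fourth (most significant) byte pass the array is fully sorted; the MSB/LSB hybrid mentioned above would replace the first passes with a most-significant-byte split to keep each bucket cache-resident.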

------
sika_grr
Please note that radix sort is easy to implement so that it works correctly
with doubles; Table 1 is misleading.
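The usual approach (a sketch of the standard bit-twiddling trick, not taken from the article) is to map each double's bit pattern to an unsigned integer whose ordering matches floating-point ordering, radix-sort those keys, and map back:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Monotone mapping from double to uint64_t. IEEE-754 doubles are stored
// sign-magnitude: non-negative values already compare correctly as unsigned
// integers once the sign bit is set, while negative values compare in
// *reverse* order, so flipping all of their bits fixes both the sign and
// the direction. After this transform, unsigned comparison agrees with
// double comparison, so any integer radix sort applies.
uint64_t double_to_sortable(double d) {
    uint64_t bits;
    std::memcpy(&bits, &d, sizeof(bits));  // memcpy avoids strict-aliasing UB
    return (bits & (1ULL << 63)) ? ~bits   // negative: flip everything
                                 : bits | (1ULL << 63);  // non-negative: set sign bit
}
```

The inverse transform after sorting is the mirror image (clear the sign bit or flip all bits back), so the whole double sort is two cheap linear passes around an integer radix sort.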

------
whoisjuan
timsort?

~~~
cbd1984
It was invented by Tim Peters for Python, and has been the default sort there
since Python 2.3. The Java runtime uses it to sort arrays of non-primitive
type (that is, arrays of values that can be treated as Objects).

[http://en.wikipedia.org/wiki/Timsort](http://en.wikipedia.org/wiki/Timsort)

OpenJDK implementation. Good notes in the comments:

[http://cr.openjdk.java.net/~martin/webrevs/openjdk7/timsort/...](http://cr.openjdk.java.net/~martin/webrevs/openjdk7/timsort/raw_files/new/src/share/classes/java/util/TimSort.java)

