True, but on these systems, most of the time, users don't care about performance. And when they do, the dataset is not large enough for algorithms like this to shine anyway.
Cached reads/writes aren't free either, so if the overhead drastically reduces those, you may come out on top 100% of the time.
> Wolfsort is primarily a proof of concept, currently it only properly supports unsigned 32 bit integers.
A lot of sorts can beat a generic sort on an array of numbers by gathering statistics, paying better attention to locality, exploiting implementation details like SIMD, or some combination of the three.
GPUs can be used for training, but there is no requirement for the model to be executed on a GPU. For example, a network can simply be created that increments a counter, which could be what a "choose the best sort" network ends up doing.
I guess someone will just have to try this.
It begins with quicksort, switches to heapsort when the recursion depth exceeds a level based on (the logarithm of) the number of elements being sorted, and switches to insertion sort when the number of elements drops below some threshold.
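That hybrid (often called introsort) can be sketched in a few dozen lines of C. This is a minimal illustration of the three-way switch described above, not any particular library's implementation; the threshold of 16 and the depth limit of 2·log2(n) are common choices but are assumptions here.

```c
#include <stddef.h>

#define INSERTION_THRESHOLD 16

static void swap_int(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Insertion sort: fast for tiny runs. */
static void insertion_sort(int *a, size_t n) {
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) { a[j] = a[j - 1]; j--; }
        a[j] = key;
    }
}

/* Heapsort: O(n log n) worst case, used as the fallback. */
static void sift_down(int *a, size_t root, size_t n) {
    while (2 * root + 1 < n) {
        size_t child = 2 * root + 1;
        if (child + 1 < n && a[child] < a[child + 1]) child++;
        if (a[root] >= a[child]) return;
        swap_int(&a[root], &a[child]);
        root = child;
    }
}

static void heap_sort(int *a, size_t n) {
    for (size_t i = n / 2; i-- > 0; ) sift_down(a, i, n);
    for (size_t i = n; i-- > 1; ) { swap_int(&a[0], &a[i]); sift_down(a, 0, i); }
}

/* Hoare partition around the middle element (real introsorts
 * typically pick a median-of-three pivot). */
static size_t partition(int *a, size_t n) {
    int pivot = a[n / 2];
    size_t i = 0, j = n - 1;
    for (;;) {
        while (a[i] < pivot) i++;
        while (a[j] > pivot) j--;
        if (i >= j) return j;
        swap_int(&a[i], &a[j]);
        i++; j--;
    }
}

static void introsort_rec(int *a, size_t n, unsigned depth_limit) {
    if (n <= INSERTION_THRESHOLD) { insertion_sort(a, n); return; }
    if (depth_limit == 0) { heap_sort(a, n); return; } /* quicksort degrading: bail out */
    size_t p = partition(a, n);
    introsort_rec(a, p + 1, depth_limit - 1);
    introsort_rec(a + p + 1, n - p - 1, depth_limit - 1);
}

void introsort(int *a, size_t n) {
    unsigned depth_limit = 0; /* 2 * floor(log2(n)) */
    for (size_t m = n; m > 1; m >>= 1) depth_limit += 2;
    introsort_rec(a, n, depth_limit);
}
```

The depth limit is what gives the worst-case O(n log n) bound: any quicksort path that recurses deeper than expected gets handed to heapsort instead.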
I think machines can do this too.
This repo contains a few thousand lines of code without so much as a single comment. In my experience that tends to happen much less in other languages. And it's not like this C code is obvious.
It's not great anymore. The software world has moved on to understanding how important these things are (and even often takes things too far), but C seems a bit stuck in its ways.
Java and C# did not replace C. They're typically used for different tasks.
Objectively speaking, do we have evidence these enterprise OOP practices lead to better software?
I wonder if wolfsort is stable and faster because it utilizes quadsort?
Which makes it pretty useless outside of theoretical exercises (not that theoretical exercises are useless). What would be a practical use case for sorting integers? Well, the integers can be keys to some kind of records, but if your records remain unsorted, accessing them is very expensive even if you have the sorted list of keys available.
Edit: Calculating the median is a practical case. Not sure how often that is needed in performance-critical contexts.
Any set of tuples/sequences can be lexicographically radix-sorted if its components can be; strings are a special case, where the components are the individual character codes.
IEEE 754 floating point numbers store the sign and exponent in the high-order bits, so positive floats compare in the same order as their bit patterns. With a small transformation to account for the sign bit, they can be treated as unsigned integers for sorting.
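The usual bit transformation looks like this; `float_to_key` is a hypothetical name, but the trick itself is standard: flip all bits of negative floats (their bit patterns sort in reverse), and flip only the sign bit of non-negative ones.

```c
#include <stdint.h>
#include <string.h>

/* Map an IEEE 754 float to a uint32_t whose unsigned order matches the
 * float's numeric order. memcpy is the well-defined way to reinterpret
 * the bits in C (no strict-aliasing violation). */
uint32_t float_to_key(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return (u & 0x80000000u) ? ~u          /* negative: reverse order   */
                             : u ^ 0x80000000u; /* non-negative: shift up */
}
```

After this mapping the floats can be fed straight into any unsigned-integer radix sort; the transformation is reversible, so the original values can be recovered after sorting.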
All radix sort requires is a recursively partitionable domain, and it performs optimally when the partitioning scheme matches the data distribution.