
Optimizing Inefficiency: the quest for the worst sorting algorithm - sytelus
https://medium.freecodecamp.org/optimizing-inefficiency-human-folly-and-the-quest-for-the-worst-sorting-algorithm-c0ba7b32ffd
======
commandlinefan
I like sleepsort:

    
    
        for (int i = 0; i < in.length; i++) {
          final int value = in[i];    // capture the element so the inner class can use it
          Thread t = new Thread(new Runnable() {
            public void run() {
              try {
                Thread.sleep(value);  // sleep proportionally to the value...
              } catch (InterruptedException e) {
                return;
              }
              out.append(value);      // ...so smaller values get appended first
            }
          });
          t.start();
        }

------
Nasrudith
The bead sort is a really smartass solution that could be considered an
instructional example of the benefits of parallelization, as well as of
exploiting situational advantages.

Gravity effectively gives you infinite free "threads" for the falling beads,
and it engages in accounting tricks with its assumptions: normally the setup
of the starting state is taken for granted, because either it is already there
or it would always have to be done the same way (no getting around reading in
the N numbers), whereas in a physical system that setup is considerably more
involved.

I liken it to being asked to give the value of pi and answering with a red
circle bisected by a green line through its center, labelled "length of red
divided by length of green". Technically correct, and efficient in terms of
calculation, but actually extracting the value involves work that is not
counted as part of the solution.

~~~
yalue
I agree, and wrote a lengthy comment below about why this "algorithm" would
actually be considered exponential time using the accepted definition of time
complexity in theoretical CS.

------
coldcode
Having spent some time learning counterpoint, I find it interesting to see
how complex it is, and how completely baffled I was to find that Bach's works
seem effortless compared to my uninspired attempts.

------
longer_arms
Surely the time complexity of the abacus sort is O(k*sqrt(n) + c*n)? What
with the time taken for the beads to fall being proportional to the square
root of the distance under free fall, and proportional to n itself once at
terminal velocity (so strictly O(n))? Radix sort on a known range of integers
has some similar-ish properties.
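
A quick sketch of the kinematics behind that estimate (assuming beads
starting from rest and a drop height that grows with n):

    d = (1/2) * g * t^2   =>   t = sqrt(2d / g),  i.e. t ~ sqrt(d)   (free fall)
    t = d / v_terminal,        i.e. t ~ d                            (terminal velocity)

which is where the sqrt(n) and n terms above come from.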

~~~
yalue
I was going to comment on this already, but even that's way too "fast" based
on the accepted notion of time complexity in theoretical CS... abacus sort
should technically be an exponential-time algorithm! (And yes, I disagree with
the wikipedia article on this topic, which lists several "possible" time
complexities, all of which I consider incorrect.)

The reason being that the accepted definition of time complexity in formal
literature is standardized not based on the number of "things" in the input
(numbers in a list, nodes in a graph, etc.), but _on the length of the input
string_. This is a common pitfall when people reason about time complexity.
Here's why this applies to abacus sort:

1. Abacus sort contains integers in its input string, and based on the
conventions of formal time complexity, these cannot be encoded in unary (if
unary encodings were allowed, then several famous NP-complete problems, e.g.
bin packing, _would_ have polynomial-time solutions based on the length of
their input).

2. The _value_ of an integer is _exponentially_ related to its length. For
example, making a binary integer only one bit _longer_ can double its _value_,
making it two bits longer will quadruple its value, etc.

3. The input to a sorting algorithm like abacus sort is a list of numbers.
Even if there are n numbers, the number of "beads" you need to simulate is
_exponentially_ proportional to the length of the longest single number (see
the small simulation sketch after this list).

4. Even if you think that the number of beads can be simulated with an
infinite number of threads, or something, you'd still need to decode the (at
least) binary input into a unary number of beads, which will take an
exponentially-related amount of memory.
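
To make point 3 concrete, here is a small bead-sort simulation sketch (an
illustration only, assuming non-negative integers, not code from the article).
The inner loop does one unit of work per "bead", so the total work tracks the
magnitudes of the values, i.e. it is exponential in their bit-length rather
than just linear in how many of them there are:

    // Bead ("gravity") sort: level[j] counts how many inputs exceed j.
    static int[] beadSort(int[] in) {
        int n = in.length, max = 0;
        for (int v : in) max = Math.max(max, v);
        int[] level = new int[max];          // max is exponential in the bit-length
        for (int v : in)
            for (int j = 0; j < v; j++)      // one "bead" per unit of value
                level[j]++;
        int[] out = new int[n];
        for (int i = 0; i < n; i++) {
            int h = 0;                       // out[i] = number of levels that still
            while (h < max && level[h] >= n - i) h++;   // hold at least (n - i) beads
            out[i] = h;
        }
        return out;
    }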

Of course, all this is not to say the algorithm isn't cool in practice, for
numbers in ranges that reasonable people care about. It's similar to a weakly-
NP-complete problem (https://en.wikipedia.org/wiki/Weak_NP-completeness) like
integer bin packing, where the time complexity is dependent on the longest
number and therefore technically exponential, but in practice solved for any
reasonable ranges of values in polynomial time.

Edit: By the way, this is why the naive prime-number-checking algorithm
(check divisibility of a value v by all values up to sqrt(v)) is _not_ an
efficient algorithm, even though many people incorrectly think it's
O(sqrt(n))!

It technically takes exponential time relative to the number of bits in the
input, so it's basically useless for guaranteeing primality of something like
a 4096-bit number.
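
A rough Java sketch of that naive test (an illustration, not from the
original comment): the loop runs about sqrt(v) times, which is roughly
2^(b/2) iterations for a b-bit value v, so it is exponential in the length of
the input even though it "looks like" O(sqrt(n)):

    // Naive trial division: correct, but ~sqrt(v) iterations means roughly
    // 2^(b/2) steps for a b-bit input, hopeless for 4096-bit numbers.
    static boolean isPrimeNaive(long v) {
        if (v < 2) return false;
        for (long d = 2; d <= v / d; d++) {   // d <= v/d is d*d <= v without overflow
            if (v % d == 0) return false;
        }
        return true;
    }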

~~~
esrauch
> 3. The input to a sorting algorithm like abacus sort is a list of numbers.
> Even if there are n numbers, the number of "beads" you need to simulate is
> exponentially proportional to the length of the longest single number.

I think this is where we'd mismatch: I'd assume baked into the problem space
is that you have N numbers to sort, each of size K, where K is fixed and
bounded. As a solution to "sort these N 16-bit ints, for some arbitrary N",
it would be the same problem space that can be solved in O(n) with
bucket/counting sorts.
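
For what it's worth, a minimal counting-sort sketch under exactly that
assumption (an illustration only: N values, each a 16-bit unsigned int). It
runs in O(n + 2^16), i.e. linear in n, only because the key width is fixed up
front:

    // Counting sort for values known to fit in 16 bits (0..65535).
    static int[] countingSort16(int[] in) {
        int[] count = new int[1 << 16];        // fixed-size tally, one slot per key
        for (int v : in) count[v]++;
        int[] out = new int[in.length];
        int pos = 0;
        for (int v = 0; v < count.length; v++)
            for (int c = 0; c < count[v]; c++)
                out[pos++] = v;                // emit keys in ascending order
        return out;
    }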

~~~
yalue
This is true, but note that the problem in general doesn't specify that the
ints are only 16-bit. It's already well established that "linear time" sorting
algorithms exist (e.g. radix sort) for numbers that are all a bounded size.
However, the key is still that the length of each number must be bounded,
because that ensures that the length of the input relates to the number of
numbers in the list to be sorted, rather than to the length of the largest
number. (The Wikipedia article on radix sort gets this right... Just not the
one on abacus sort)
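
A minimal LSD radix sort sketch along those lines (an illustration only,
assuming non-negative 32-bit ints sorted one byte at a time). The "linear"
bound holds only because the word width, and hence the number of passes, is
bounded in advance:

    // LSD radix sort: four stable counting-sort passes of 8 bits each,
    // O(4 * (n + 256)) = O(n), but only because the key length is fixed.
    static int[] radixSort32(int[] in) {
        int[] a = in.clone(), b = new int[in.length];
        for (int shift = 0; shift < 32; shift += 8) {
            int[] count = new int[257];
            for (int v : a) count[((v >>> shift) & 0xFF) + 1]++;     // histogram
            for (int i = 0; i < 256; i++) count[i + 1] += count[i];  // prefix sums
            for (int v : a) b[count[(v >>> shift) & 0xFF]++] = v;    // stable scatter
            int[] tmp = a; a = b; b = tmp;                           // swap buffers
        }
        return a;
    }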

------
murkle
Repeat until sorted(shuffle(list))

~~~
rozim
aka bogosort
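
A minimal sketch of that one-liner in Java (an illustration only; the expected
running time is around O(n * n!) for distinct elements):

    // Bogosort: keep shuffling until the list happens to come out sorted.
    // (uses java.util.Collections and java.util.List)
    static void bogoSort(List<Integer> list) {
        while (!isSorted(list)) {
            Collections.shuffle(list);
        }
    }

    static boolean isSorted(List<Integer> list) {
        for (int i = 1; i < list.size(); i++) {
            if (list.get(i - 1) > list.get(i)) return false;
        }
        return true;
    }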

------
glitchc
An idea: Use Bitcoin’s PoW as the function for Worstsort.

