
Sleep Sort - shinvee
http://dis.4chan.org/read/prog/1295544154
======
ChuckMcM
4chan discussion aside, the concept here is that you 'sleep' _n_ units of time
for each element, where _n_ is linearly related to the lexical relationship of
the element to the other elements in the list. The 'aha' was that the
resulting threads will 'wake up' or 'return' in lexical order.

Basically if you transform set L into the time domain, when collecting it back
you get it back sorted.

It's a fun result, and as an exercise in systems analysis it can be
enlightening to look at the impact of timer resolution, thread creation, and
ordering, but ultimately the 'trick' is that you've exploited the 'insertion
sort' that the kernel does on sleep events. You could try its close cousin
"priority sort" where you create threads where you set the priority of each to
be related to the value 'n' of the element, and all fractionally lower than
the parent thread (most UNIXes are not that flexible but some realtime OSes
are) then as the last step you 'yield' and the threads print out their
contents in priority order and poof they are sorted. But again, the sorting
happened in the kernel when they got inserted into the runq in priority order.
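The "sorting happens in the kernel" point can be sketched in Python: model the scheduler's sleep queue as a priority queue keyed by wake-up time, and the sort falls out of ordered insertion and removal, with no actual sleeping. (This is a conceptual model, not how any particular kernel stores its timers.)

```python
import heapq

def sleep_sort_simulated(values):
    """Model the kernel's timer queue: each 'thread' registers a
    wake-up time equal to its value; popping the queue yields the
    'threads' in wake-up order. The heap does the sorting."""
    timer_queue = []
    for v in values:
        heapq.heappush(timer_queue, v)   # kernel inserts the sleep event
    # "threads" wake in order of their registered wake-up times
    return [heapq.heappop(timer_queue) for _ in values]

print(sleep_sort_simulated([5, 3, 2, 1, 10]))  # [1, 2, 3, 5, 10]
```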

------
SwellJoe
I was impressed by the brevity of the C solution using OpenMP (comment #81),
especially compared to significantly more verbose Erlang and Haskell examples.
I kept reading it, thinking, "Where's the fork?" and seeing nothing but a
simple for...and then remembered that # is a preprocessor directive, rather
than a comment, in C, and then googled for "parallel omp for". So, I actually
learned something today from that ridiculous thread:
<http://en.wikipedia.org/wiki/OpenMP>

Also, the Perl 6 one-liner is sort of laughably cute. I'm not sure I believe
it would actually work, but if it does, the conciseness of Perl 6 is even
higher than I'd realized; dropping about 9 words from the Perl 5 one-liner
(which is already very small). But, I learned about the hyper operators in
Perl 6, which is also cool:
[http://perl6advent.wordpress.com/2009/12/06/day-6-going-into...](http://perl6advent.wordpress.com/2009/12/06/day-6-going-into-hyperspace/)

In short, that was awesome reading over my morning tea.

~~~
gazrogers
There are a few short implementations of this algorithm on CodeGolf.SE:
[http://codegolf.stackexchange.com/questions/2722/implement-s...](http://codegolf.stackexchange.com/questions/2722/implement-sleep-sort)

~~~
SwellJoe
I'm relieved to note that Haskell and Erlang _don't_ have to be ridiculously
verbose. There's even an Erlang one-liner that is competitive with the Perl
and Ruby implementations for brevity. I was amused to note that Perl is still
the shortest implementation, and could be shorter (though the frighteningly
short Perl 6 implementation mentioned in the 4chan thread is either wrong, or
the implementation of the <<. operator is still unfinished in Rakudo Star,
because it does not actually do the job; I don't understand the hyperoperators
well enough yet to know which is true).

------
mcantor
It finally happened.

HN cut out the middle man. Instead of linking to something on Reddit about
4chan, we finally just linked straight to 4chan.

~~~
r00fus
The problem with 4chan is that it's blocked at work (probably for good
reason). Now I need to find the Reddit summary.

~~~
Stormbringer
Sleepsort: For each item in a list of items, create a thread that sleeps as
long as its number, then prints its number.

E.g. 5,3,2,1,10 makes five threads, one sleeps for one second and prints 1,
one sleeps for two seconds and prints 2, one sleeps for three seconds and
prints 3, one sleeps for five seconds and prints 5 and the last one sleeps for
10 seconds and prints 10.
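That description translates almost directly into a threaded sketch (Python here, with a scale factor so small lists finish quickly; it assumes non-negative numbers and enough timer resolution to separate distinct values):

```python
import threading
import time

def sleep_sort(values, scale=0.1):
    """One thread per value; each sleeps proportionally to its value,
    then appends it to the shared result list."""
    result = []
    lock = threading.Lock()

    def worker(n):
        time.sleep(n * scale)
        with lock:
            result.append(n)

    threads = [threading.Thread(target=worker, args=(v,)) for v in values]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result

print(sleep_sort([5, 3, 2, 1, 10], scale=0.05))  # [1, 2, 3, 5, 10] (usually)
```

As the rest of the thread points out, correctness depends entirely on the scheduler honoring the relative wake-up order.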

------
Estragon
That was a brilliant joke, hardest I've laughed all day. Some choice comments:

    
    
      I don't like to wait 218382 seconds to sort '(0 218382)
    
      If you slept exp(n) instead of n it could easily include
      negative integers too!
    
      Oh shit, we're busted, there's a REALISTICALLY THINKING
      PERSON in this thread!

------
sbierwagen
For those few of you not conversant with 4chan, the _dis_ subdomain is text
only, like the original 2ch. The textboards tend to be disdainful of the
imageboards, and incidentally, they get much less traffic, since it's
impossible to post porn on them.

The mailto:sage links in some of the posts are essentially public downvotes;
from the Japanese "下げ", meaning "down", and pronounced "sah ge".

~~~
nateberkopec
I consider myself conversant with 4chan, and I had no idea that the dis
subdomain existed.

I always wondered if there was a chan for hackers...now I realize there is.
And oh boy, what a chan it is.

~~~
kabushikigaisha
Most decent sized chans have their /prog/ board. See shii's Overchan[1] for a
comprehensive list of chans on the WWW.

Shii personally runs a pretty nice chan called 4-ch[2] with a large and decent
DQN[3] textboard. (S)he founded this chan after being banned from 4chan and
SomethingAwful. I don't know whether shii still frequents 2ch or Futaba.

I don't consider the English-language chans for programming to be all that
great. Most (if not all) are textboards with few postings and lots of trolling
over the last 3 years. I don't know the state of the huge Japanese ones
however, as I do not frequent them or read the language.

[1] <http://shii.org/2ch/>

[2] <http://4-ch.net/4ch>

[3] <http://4-ch.net/dqn/>

~~~
mukyu
Squeeks runs 4-ch, not shii.

edit:

also, shii doesn't run overchan either--Mohey Pori does

from what I remember, someone DDoS'd it in their war against all imageboards
(can't remember who) and shii ended up hosting it

------
wtracy
Many of the posts here and on 4chan have deconstructed the algorithm and
proven that it is actually O(n^2) or O(n * lg n) when you include the work
done by the operating system's scheduler.

However, here's a different perspective to look at this from: What if this
were implemented on an FPGA?

Let's ignore for a moment that the number of gates needed would increase
linearly with the size of the input. I'll also simplify the math by assuming
that a list of n numbers contains only values in the range [0..n].

Let's further assume that we're only sorting positive integers, and that each
job sleeps for x clock cycles (where x is the input number). We could sort a
list in O(n) time.

Digging even deeper: Quicksort performs an integer comparison followed by a
branch O(n lg n) times. Even if your processor can compare two integers in one
clock cycle, a branch is probably going to screw up your processor's
pipelining and take quite a few clock cycles to execute. So, we're possibly
looking at a significant speed increase even before we consider the difference
in order of growth.

So, assuming a predefined upper bound on the number of items in a list, this
just might be a great way to perform hardware-assisted sorts.
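In software terms, the hypothetical hardware would be computing a counting sort: one slot per possible value (one per "clock cycle"), with an O(n) collection pass at the end. A minimal sketch, under the same assumption that values lie in [0, max_value]:

```python
def counting_sort(values, max_value):
    """One bucket per possible value; emitting buckets in order is
    the O(n) collection pass (assumes 0 <= v <= max_value)."""
    counts = [0] * (max_value + 1)
    for v in values:
        counts[v] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)
    return out

print(counting_sort([5, 3, 2, 1, 7], max_value=7))  # [1, 2, 3, 5, 7]
```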

Thoughts?

~~~
wtracy
I just realized: We would need a circuit polling each of the jobs to see when
they finish, and I don't see how it could poll all of the jobs every clock
cycle.
So I don't see it being possible to achieve the per-cycle resolution I
suggested above.

Furthermore, the polling circuit would have to poll each of the n tasks n
times, leading us finally back to a running time of O(n^2).

Still a fun thought experiment, though.

~~~
FlowerPower
Why would you poll the tasks? Can't each task wake up after its time is done
and just fire an interrupt with the number of the task?

------
TeMPOraL
Reminds me of the "ListBox sort" story I heard once; it was about a programmer
who, wanting to sort strings and not knowing how, put the strings into a
hidden ListBox GUI control and then read them back in sorted order.

~~~
comex
When I was, like, 10, I wanted to find the distance between two points in a
Visual Basic program but didn't know how, so I created a Line control between
the points and tried to read back the Width. (It didn't work, since Width was
just the thickness of the line.)

------
xorglorb
It's actually an interesting solution. If you reduced the time unit to a
microsecond, it would take about a second to sort values up to one million.
Definitely not an optimal solution, and it has some potential for race
conditions (if one thread/process got hung up on I/O), but interesting
nonetheless.

~~~
forgotusername
At that resolution a modern OS is likely to round up the delay to some energy-
saving number, and the run queue is going to be so contended that even if the
timers fired at the right time, there would be little guarantee the processes
awoke in the desired order.

I think on Linux the granularity issue is true for e.g. nanosleep(), but
there are also the POSIX clock_*() functions, which I think guarantee higher
resolution.

~~~
leif
Higher resolution, but I don't think higher precision (especially in the face
of I/O).

------
justin_vanw
This isn't interesting at all. Every thread sleeps for that long. When you
sleep, something like this happens:

The OS is told to wake up your thread after x time has elapsed. It puts this
request into some data structure which it does something like poll from time
to time so it knows what thread to wake up at what time. You are offloading
your sort to the OS, which is probably then doing something like putting it
into a linked list of timeouts that have been registered (maybe a heap?).
After approximately the allotted time, some mechanism akin to a signal is used
to tell your thread to wake up.

Also, usually the thread will just sleep AT LEAST as long as you tell it to
sleep, but if the system is busy it could sleep a long time more. Threads can
be woken up at the same time or slightly out of order, or 'woken up' only to
find that all the CPUs are working on something else, and thus waiting for a
time slice, getting one, then waiting again, and not getting to use stdout for
_weeks_. It's uncommon to see machines with really high load these days, but a
1 core computer with 25000 processes all competing for CPU time, and which is
swapping heavily, will potentially run each process millions of times slower
than running one process.

TL;DR: This is curious, but it will fail much of the time and has huge memory
overhead. Plenty of good sorts work in place; this one requires something
like 8 MB per thread on Linux due to the stack allocated for each thread. At
worst it takes potentially millions of years, and it just delegates to some
other sort anyway.

~~~
pbiggar
8 MB per thread is just constant overhead.

~~~
leif
constant in the number of elements, sub-constant in the size of the input

~~~
JadeNB
I think maybe it's been too long since I last did analysis of algorithms … the
only sensible meaning that I can assign to 'sub-constant' is that the
contribution tends to `0` as the input grows; but it seems to me that `8
mb/thread * ≥ 1 thread` is certainly `Ω(1)`. (Or maybe I just missed a dry
joke?)

~~~
leif
8 MB is constant, but the number of bits in the input grows like log of the
largest input value, so the 8 MB overhead contribution goes down as the input
grows.

Of course, the kernel isn't going to support more than 64 bits of sleep times
any time soon, so it's okay if you want to say it's constant. ;-)

~~~
JadeNB
I'm sorry … surely that just means that the _relative_ overhead tends to `0`,
which one should express by saying that the _absolute_ overhead is sub-linear
(constant, in this case)?

~~~
leif
the absolute overhead is linear in the number of elements, sub-linear in the
size of input

------
liedra
I used to work as an editor for freshmeat.net and we would get hilarious
submissions like this quite frequently. For example, the "revolutionary new
file compression algorithm" that quite literally converted a file into binary
and removed all the 0s. Such a shame that the author "hadn't gotten around to"
writing a de-compression tool!

We were often quite sad that we _couldn't_ publish some of these gems, they
were brilliant.

------
Sukotto
Would someone please summarize the article?

There's no way I'm clicking a 4chan link from here at work.

~~~
SwellJoe
It is currently SFW (it seems to have images disabled).

But, the post that started it all is:

Man, am I a genius. Check out this sorting algorithm I just invented.

    
    
      #!/bin/bash
      function f() {
          sleep "$1"
          echo "$1"
      }
      while [ -n "$1" ]
      do
          f "$1" &
          shift
      done
      wait
    
      example usage:
      ./sleepsort.bash 5 3 6 3 6 3 1 4 7

~~~
DarkShikari
_It is currently SFW (it seems to have images disabled)._

/prog/ is a text-only board.

~~~
aw3c2
There is an ASCII swastika with a racist slur in the thread now.

------
kabdib
One of my less well received interview questions is: "Okay, we both know
theoretical lower limits on sorting. Can you come up with a terminating
algorithm that is most /pessimal/?"

Usually I get a blank stare. Sometimes I get a great answer.

~~~
sanj
I always like the "shaking box" algorithm:

1. Check if the list is ordered. If it is, we're done!

2. Randomize the order of the list. Go back to step 1.

N!

~~~
TeMPOraL
Worst case O(∞); it's known as 'random sort' or 'bogosort'.

Funny thing, it can theoretically be O(1) on quantum computers, assuming the
multiple-universe interpretation of quantum mechanics:
<http://en.wikipedia.org/wiki/Bogosort#Quantum_bogosort> ;).

~~~
random42
Randomizing a list of n entities once is an O(n) operation; I'm not sure how
it would be done in O(1) on quantum computers.

~~~
TeMPOraL
The link that Dove gave suggests there's an idea of quantum randomization that
will split our universe into n! universes in constant time.

------
azim
What's interesting is that in theory, assuming an appropriate scaling factor
to divide the input by, the time and space complexity for Sleep sort are O(1).
This is basically like hashing numbers, then iterating over the buckets,
except that the iteration is done over time. Given that, one could envision
degenerate cases for which Sleep sort could in theory outperform standard
algorithms like quicksort.

------
_delirium
Interesting idea I hadn't seen before. It's basically bucket sort, but using
the scheduler to hold the bins.

~~~
_delirium
Correcting myself: while that's what I think of it conceptually (everything is
placed into integer-valued "bins" via the integer parameter taken by sleep()),
implementation-wise it's probably equivalent to heapsort or something similar,
depending on whether the scheduler stores sleeping threads in a heap or some
other data structure.

------
IgorPartola
Optimize it by making the sleep time log(x) instead of x, or a fraction of x.
That makes it O(log n) complexity :). Use any monotonically increasing
function, really.
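The reason any monotonically increasing function works: it preserves the relative order of the wake-up times, so the threads still finish in sorted order (the wall-clock wait shrinks, though the scheduler does the same amount of bookkeeping). A quick check in Python:

```python
import math

values = [5, 3, 2, 1, 10]
# A monotonically increasing transform (here log) rescales the
# wake-up times but keeps their relative order.
by_value = sorted(values)
by_log = sorted(values, key=math.log)
print(by_value == by_log)  # True
```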

------
joe24pack
Of course it's brilliant, it just delegates the sorting to the scheduler. The
OP on 4chan is ready for a PHB position at a large corporation.

------
drobilla
Hilarious, though not strictly correct. There is no guarantee that 'sleep n'
sleeps for n seconds, or that 'sleep n' wakes up before 'sleep n+1'. Granted,
if scheduling is so far off that delays on the order of seconds become a
problem, you've probably got bigger problems...

------
zmitri
Guys, isn't this just another sorting problem? Aren't you just relying on the
thread scheduler to sort them?

------
justin_vanw
Are those of you who are saying there is _anything at all_ good or usable
about this approach just trolling? _Pleaaase_ just be trolling; if my opinion
of humanity goes any lower I'll have to start building a 'compound'.

~~~
smosher
Note to future historians: In the age before commercialized internet access
humans found humor in _absurdism_, _irony_, _satire_ and sometimes even
_sarcasm_. Today these forms are no longer considered humorous but malicious
and offensive, and are now classified under the blanket term _trolling_.

~~~
ltbarcly3
Yikes, I guess you got me. I didn't realize that attempting to piss someone
off anonymously for your own amusement was _satire_. Your ideas intrigue me
and I am interested in subscribing to your newsletter.

~~~
smosher
I was hoping you'd see the irony in my reply. I didn't mean any offense.
(Thanks for illustrating my point though.)

------
peregrine
I think the more interesting solutions are deeper in the thread mainly #43,
#44, #83.

Mostly just for fun, but it's neat to see an Erlang solution, since this
should be right up its alley.
be right up its alley.

~~~
ctdonath
Please share for the net-nannied among us.

~~~
ricardobeat
JavaScript version for you:

    
    
       var numbers  = process.argv.slice(2)
         , output   = []
         , negative = []
    
       for (var i=0, ln=numbers.length; i<ln; i++){
          var n = +numbers[i]
          setTimeout((function(n){ return function(){
             if (n<0) negative.unshift(n)
             else output.push(n)
             if (output.length + negative.length === numbers.length) {
                return console.log(negative.concat(output))
             }
          }})(n), Math.abs(n) * 100) // ms; scaled so distinct values land in distinct timer slots
       }
    

Works with negative numbers.

    
    
        $ node sleepsort -2 1 0.1 4 -1 7 -3 9
        [ -3, -2, -1, 0.1, 1, 4, 7, 9 ]

~~~
nlco
Your code can be simplified a bit:

    
    
        function sleep_sort (inputs) {
          function child (number) {
            setTimeout(function () {console.log(number)}, Math.pow(2,number))
          }
    
          for (var i = 0; i < inputs.length; ++i)
            child(inputs[i])
        }
    
        sleep_sort(process.argv.slice(2));

~~~
ricardobeat
That won't work for numbers < -5 and has a much longer worst case.

------
raymondh
Python version: <http://code.activestate.com/recipes/577756-sleepsort/>

------
nick_urban
I don't think this is meant to be serious.

Maybe on a quantum computer...

------
GBond
Wow, 4chan on the HN frontpage is a milestone. I recall reading somewhere
that 4chan (among a few other sites) articles are auto-flagged when submitted.

------
allochthon
It's a joke, guys.

~~~
peterwwillis
Too bad it's not a funny one

------
Stormbringer
Wow. An order 0 sorting algorithm (it is O(0) not O(n) because it doesn't make
_any_ comparisons). Amazing.

~~~
arkem
Just because a sorting algorithm doesn't use any comparisons doesn't make it
O(0).

For example, Radix Sort (<http://en.wikipedia.org/wiki/Radix_sort>) is not a
comparison-based sort, and its complexity is O(k*n), where k is the key
length (e.g. the number of digits of the largest element).
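For reference, a minimal LSD radix sort sketch in Python, showing that O(k*n) shape: k stable bucketing passes, no element-to-element comparisons, yet clearly not zero work:

```python
def radix_sort(values, base=10):
    """LSD radix sort for non-negative integers: one stable
    bucketing pass per digit of the largest element."""
    if not values:
        return []
    # count the digits of the largest element
    digits, biggest = 1, max(values)
    while biggest >= base:
        biggest //= base
        digits += 1
    for d in range(digits):
        buckets = [[] for _ in range(base)]
        for v in values:
            buckets[(v // base ** d) % base].append(v)
        values = [v for bucket in buckets for v in bucket]
    return values

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```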

------
random42
I don't think this sort is dependable for sorting floats in close ranges. I
tried it with ./sleepsort.bash 1 1.01 1.01 1.02

and output was

1

1.01

1.02

1.01

Clever hack, nonetheless.

------
rajasharan
My first reaction was that it's going to be O(9^n) worst case, but there is
an & after f "$1" which spawns its own process for each number. Not bad;
maybe it's a good algorithm if all the inputs are single digits.

~~~
srl
Of course, the proper algorithm, when all are single digits, is to use an
array [10]int.

------
uniclaude
Sorry, can't open the link here (blocked). Could anyone please give a cached
version? (or even a link to a full-page screenshot, a la 4chan). - Thanks

------
Andi
Do you see the hidden narrative?

