
Help this man decipher his cool creative code. - crisnoble
http://dl.dropbox.com/u/1094010/Projs/spacefill.html
======
VMG
While it's cool, I don't think it's particularly mysterious. When playing
around with graphical algorithms you step on these kinds of patterns all the
time.

~~~
madmax108
+1 to that. I remember computer graphics classes in college where we'd
basically start off with a fixed algorithm, play around with random
variables and steps, and end up with stuff that looked amazing (even though
no one really knew why!) :P

~~~
ec00
I feel like sharing the strange flood-fill algorithm I once stumbled upon:
<https://vimeo.com/699396>

~~~
joshu
Gorgeous.

------
im3w1l
A simple breakdown of the algorithm:

1. Choose an order for the four directions UP, DOWN, LEFT, RIGHT.
2. Choose a starting point, add it to the list, and draw it white.
3. Add all neighboring points to the list in the order chosen in step 1.
Points that have already been added (even if since removed), or that would
fall outside the canvas, are omitted.
4. Let n be the number of points drawn white and m the number of points
currently in the list. Remove the point at list index n % m.
5. Color that point white and go to step 3.
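The breakdown above can be sketched roughly like this (a reconstruction, not the original code; the grid size, the Set-based bookkeeping, and all names are my assumptions):

```javascript
// Rough sketch of the flood-fill variant described in the breakdown.
function spacefill(width, height, maxSteps) {
  // Step 1: a fixed order for the four directions (UP, DOWN, LEFT, RIGHT).
  const order = [[0, -1], [0, 1], [-1, 0], [1, 0]];
  const key = (x, y) => y * width + x;

  const start = [width >> 1, height >> 1];
  const open = [start];                   // the candidate list
  const seen = new Set([key(...start)]);  // every point ever added to the list
  const white = new Set([key(...start)]); // Step 2: draw the start white

  let cur = start;
  for (let n = 1; n <= maxSteps; n++) {
    // Step 3: add legal neighbours of the current point, in the fixed order.
    for (const [dx, dy] of order) {
      const x = cur[0] + dx, y = cur[1] + dy;
      if (x < 0 || y < 0 || x >= width || y >= height) continue; // outside
      if (seen.has(key(x, y))) continue; // already added, even if removed
      open.push([x, y]);
      seen.add(key(x, y));
    }
    if (open.length === 0) break;
    // Step 4: with n points drawn and m points listed, remove index n % m.
    cur = open.splice(n % open.length, 1)[0];
    // Step 5: colour it white and loop back to step 3.
    white.add(key(...cur));
  }
  return white;
}
```

Rendering each point of the returned set as a pixel should reproduce the branching pattern; swapping the splice index for 0 gives an ordinary breadth-first fill.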

In order to understand what is going on, let's first discount the conditions
for a point not to be added to the list. Say such points were added anyway but
ignored when popped. We would then iterate over every other point in the list,
because at every step we both increment our list index AND remove one element.
If the directions at indices 0 and 2 were non-opposite (e.g. LEFT and UP), we
would flood-fill in the left-top direction. If they were opposite, we would
first fill a line. Anyway, we need not go deeper into what would happen after
that; it was just to get a feel for it, because we actually do have those two
conditions for not inserting a point into the list. They can probably be
thought of as giving rise to a pseudo-random disturbance in our index picking.

If this approximation is accepted, the algorithm looks like this:

1. Choose an order for the four directions UP, DOWN, LEFT, RIGHT.
2. Choose a starting point, add it to the list, and draw it white.
3. Add all neighboring points to the list in the order chosen in step 1.
4. Let n be the number of points drawn white and m the number of points
currently in the list. Remove the point at list index
Math.floor(n * (1 + Math.random())) % m.
5. Color that point white and go to step 3.

From this we can get a feel for what is happening. The reason we proceed
outwards is that the points we color are more and more recently added (i.e.
further out). The reason for the gaps is that we effectively increase our
index by 2 plus a small random integer at each step, skipping over entries.

The reason that we start filling again from the beginning is not explained by
the random model, but can be understood from the fact that in the original
algorithm, when we reach the borders, n starts increasing faster than m. This
is because there is a high chance that candidate points are illegal and never
get added. Since n increases faster than m, n % m wraps around, and we start
drawing points with low indices again.

I am not completely sure about these points: I have not tested them, but they
seem a likely explanation.

~~~
akx
OP (well, discoverer of the algorithm) here:

The thing is, there is no per-step randomness. The small integer (pickOffset)
only adds some variation to the algorithm, and is initialized only when you
hit Space. I'm pretty sure that if pickOffset were changed on each step(), the
magic would be lost.

What I'm wondering is why this simple change to the picking algorithm
(open.pop(nStep % open.length) instead of open.pop(0), which gives a simple
radial outward fill) causes this beautiful, organic sort of pattern to emerge.

~~~
gjm11
I haven't thought this through very carefully and it may be twaddle, but:

With pop(0) you have a queue, you always take out the oldest thing in it, and
so you get breadth-first search.

Instead you're doing pop(0) then pop(1) then pop(2), etc., which means you're
taking out "every other" entry from the queue. (Until the popping overtakes
the adding, which happens around about the time when a typical newly explored
pixel generates fewer than 2 new neighbours to explore, by which time most of
the interesting pattern generation has already happened.)

Now, when any pixel is processed its (not-yet-visited) neighbours get pushed
consecutively. That means that when the exploration process reaches them it'll
only explore one or two of them. In the usual "early" case there are three
such neighbours; half the time one will get explored, half the time two will.
The other neighbours get left behind until the exploration wraps (i.e., "step"
passes the end of the queue), which doesn't happen for ages.

Hence the branching tree structure you see: right at the start we explore only
two of the starting pixel's four neighbours, and the other two get left
behind; so now we have two growing "tips", and when we process each one we
either grow it one step further or split it two ways. And we don't return to
the other neighbours for ages, so the tree grows basically until it fills the
available space before the gaps start getting filled in.
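That "every other entry" effect shows up even in a toy model of the queue, with no pixels involved (the three-fresh-neighbours-per-step figure is an illustrative assumption):

```javascript
// Toy model of the queue behaviour described above: each step appends three
// "fresh neighbours" and pops index n % length, as in the original picking.
const queue = [0];
const popped = [];
let nextId = 1;
for (let n = 1; n <= 20; n++) {
  for (let k = 0; k < 3; k++) queue.push(nextId++); // newly discovered points
  popped.push(queue.splice(n % queue.length, 1)[0]);
}
// popped is [1, 3, 5, ..., 39]: exactly every other arrival gets explored,
// while the even-numbered entries are left behind until the index wraps.
console.log(popped);
```

So half the frontier is always deferred, which is what lets the tips race ahead and leave gaps behind them.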

(There are plenty of details not explained by the above. More thought
required.)

------
moondowner
Looks like some sort of a fractal generator, and by looking at the source, it
uses Fisher-Yates.

<http://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle>

Also this link is referenced in the code:

<http://sedition.com/perl/javascript-fy.html>

~~~
im3w1l
Look where Fisher-Yates is used. It is only invoked if you press a certain
key, otherwise not at all.

------
DanBC
Looks very similar to "Diffusion-Limited Aggregation". There are two main
versions - grow from the centre, and grow from the edge.

You have a sticky pixel, and send it on a random walk. When it hits another
pixel it sticks, and you generate another sticky pixel.
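For comparison, the centre-seeded version described above can be sketched like this (grid size, where walkers spawn, and the clamped walls are my assumptions):

```javascript
// Minimal sketch of centre-seeded diffusion-limited aggregation: release a
// particle on a random walk; when it touches the cluster, it sticks.
function dla(size, particles) {
  const key = (x, y) => y * size + x;
  const stuck = new Set([key(size >> 1, size >> 1)]); // sticky seed in centre
  const dirs = [[0, 1], [0, -1], [1, 0], [-1, 0]];
  const touchesCluster = (x, y) =>
    dirs.some(([dx, dy]) => {
      const nx = x + dx, ny = y + dy;
      return nx >= 0 && ny >= 0 && nx < size && ny < size &&
             stuck.has(key(nx, ny));
    });

  for (let p = 0; p < particles; p++) {
    // drop a walker on a random free cell
    let x, y;
    do {
      x = Math.floor(Math.random() * size);
      y = Math.floor(Math.random() * size);
    } while (stuck.has(key(x, y)));
    // random walk (clamped at the walls) until it touches the cluster
    while (!touchesCluster(x, y)) {
      const [dx, dy] = dirs[Math.floor(Math.random() * 4)];
      x = Math.min(size - 1, Math.max(0, x + dx));
      y = Math.min(size - 1, Math.max(0, y + dy));
    }
    stuck.add(key(x, y)); // it sticks; generate the next particle
  }
  return stuck;
}
```

The branching it produces is driven by genuine randomness, unlike the flood fill under discussion, which is fully deterministic.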

------
tezzuya
Seems like an example of a DLA (Diffusion Limited Aggregation)
<http://en.wikipedia.org/wiki/Diffusion-limited_aggregation> ?

------
recursive
If you like this, you may also enjoy Langton's Ant.

<http://en.wikipedia.org/wiki/Langtons_ant>
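The ant's rule set fits in a few lines; a minimal sketch (the coordinate and direction conventions are my own choice):

```javascript
// Langton's Ant: on a white cell turn right, on a black cell turn left,
// flip the cell's colour, then step forward.
function langtonsAnt(steps) {
  const dirs = [[0, -1], [1, 0], [0, 1], [-1, 0]]; // up, right, down, left
  const black = new Set();
  let x = 0, y = 0, d = 0; // start at the origin, facing up
  for (let i = 0; i < steps; i++) {
    const k = `${x},${y}`;
    if (black.has(k)) {
      d = (d + 3) % 4; // black cell: turn left, flip it to white
      black.delete(k);
    } else {
      d = (d + 1) % 4; // white cell: turn right, flip it to black
      black.add(k);
    }
    x += dirs[d][0];
    y += dirs[d][1];
  }
  return { black, x, y };
}
```

Run it for around ten thousand steps and the famous emergent "highway" appears out of the chaos.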

------
randlet
During my undergrad I wrote some simulations of ballistic deposition that
produced patterns that looked very similar to this.

edit: Here's a video someone else created of a similar looking situation
<http://www.youtube.com/watch?v=j-1uotbxrcc>

------
verroq
Looks like the "???" in <http://fallingsandgame.com/sand/hell1.html>

------
0utsider89
Hi all. Basically it has a lot to do with the open array expanding at a steady
rate: as the open array grows, the indices pulled by the selectNode function
keep the same spacing, which lets the pattern grow at the same rate on all
sides.

------
clavalle
Looks like a variation of a rapidly exploring random tree:
<http://en.wikipedia.org/wiki/Rapidly-exploring_random_tree>

------
crisnoble
From the author:

>This is a strange flood-fill algorithm I stumbled upon a while back and I
have no idea why it does its thing like it does.

>...take a look, and if you know what's going on, let me know.

------
sangupta
Some of the patterns that get generated are really cool. Would be glad to know
how it works, so if someone does drop you a line on Twitter, please do update
all of us.

------
jburgos
Seems like an example of a Rapidly-exploring Random Tree (RRT) algorithm,
mostly used in motion planning. Granted, this display looks more like the RRT*
variant.

------
jere
It's probably not this, but a neat little page with a request to analyze the
code reminds me of some other hiring puzzles.

------
46Bit
Possibly the least smart comment on this thread, but it resembles the purple
destroyer in the Falling Sand Games.

------
huhtenberg
Similar output is produced by Conway's Game of Life if its thresholds are
tweaked away from the defaults.
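A Life variant with adjustable thresholds is a one-function change: make the birth and survival neighbour counts parameters instead of the fixed B3/S23 (this sparse-set representation is my own sketch, not from the linked page):

```javascript
// One generation of a Life-like automaton with configurable thresholds.
// cells: Set of "x,y" strings for live cells; birth/survive: neighbour
// counts at which a dead cell is born or a live cell survives.
function step(cells, birth, survive) {
  const counts = new Map(); // live-neighbour count per candidate cell
  for (const c of cells) {
    const [x, y] = c.split(",").map(Number);
    for (let dx = -1; dx <= 1; dx++)
      for (let dy = -1; dy <= 1; dy++) {
        if (dx === 0 && dy === 0) continue;
        const k = `${x + dx},${y + dy}`;
        counts.set(k, (counts.get(k) || 0) + 1);
      }
  }
  const next = new Set();
  for (const [k, n] of counts)
    if (cells.has(k) ? survive.includes(n) : birth.includes(n)) next.add(k);
  return next;
}
```

Conway's defaults are `step(cells, [3], [2, 3])`; nudging those lists gives the denser, coral-like rule sets the comment is presumably alluding to.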

------
geuis
Would be helpful if the interface had UI controls. Most phones don't have
keyboards attached.

------
ether
Could you explain why this is strange? To me it just looks like a fractal
algorithm.

------
adv0r
Looks like he read Wolfram's book
<http://www.wolframscience.com/nksonline/toc.html>

~~~
crisnoble
Thanks for the link! I just added that one to my must read later list. (not to
be confused with my always growing read later list)

~~~
adv0r
I suggest you buy the book; it's a masterpiece IMHO.

~~~
sophacles
It is a great primer, particularly if you can look past the Wolfram "I
invented everything, how awesome am I" language. Read up on some criticisms of
it too; there are some things that are over the top, and a lot of "ink" (and
bits...) has been spent on the subject.

~~~
adv0r
Yes, I hate that "I invented everything, how awesome am I" attitude too,
actually. I'm not here to defend it.

But he still deserves some recognition. After all, he was among the first to
bring AI to the masses with Alpha, and Mathematica has helped millions of
researchers. And, 10 years after NKS, these are fairly good results ->
[http://blog.stephenwolfram.com/2012/05/its-been-10-years-
wha...](http://blog.stephenwolfram.com/2012/05/its-been-10-years-whats-
happened-with-a-new-kind-of-science/) ...

You can judge people from many different perspectives, but when it comes to
"getting things done", you should step back and congratulate him. Again, IMHO.
~~~
sophacles
_But still he deserve some recognition,_

and

 _but when it comes to "getting things done", you should step back and
congratulate him_

My first words were "it's a great primer", followed with a suggestion for how
to get the most from the work.

I'm not trying to downplay Wolfram's other work either, just pointing out that
there are issues and grains of salt to take with this particular work.

