

A practical use for space-filling curves - RiderOfGiraffes
http://www2.isye.gatech.edu/~jjb/mow/mow.html

======
samlittlewood
Another good example is using Morton Order for things like texture maps in
graphics - it is easy to derive a memory address by interleaving the bits of
the x,y coords:

<http://www.devmaster.net/forums/showthread.php?t=10125>

<http://en.wikipedia.org/wiki/Z-order_curve>
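For concreteness, here's a minimal Python sketch of the bit-interleaving trick (the helper names `part1by1` and `morton_index` are mine, and this assumes 16-bit coordinates):

```python
def part1by1(n):
    """Spread the low 16 bits of n so that each lands in an even bit position."""
    n &= 0xFFFF
    n = (n | (n << 8)) & 0x00FF00FF
    n = (n | (n << 4)) & 0x0F0F0F0F
    n = (n | (n << 2)) & 0x33333333
    n = (n | (n << 1)) & 0x55555555
    return n

def morton_index(x, y):
    """Interleave the bits of x and y: x occupies even positions, y odd ones."""
    return part1by1(x) | (part1by1(y) << 1)
```

Texels that are near each other in 2-D then get nearby Morton indices, so a 2-D-local access pattern touches fewer cache lines than it would with plain row-major addressing.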

From distant memory: switching to a variant of this from simple row-major
order textures gave a >20% increase in a scene rendering benchmark (software
pipe, full scene, no game logic). Prior to that I had been doing daft things
like storing textures in column- or row-major order depending on their
'usual' orientation.

~~~
enjalot
I came here to say this. Not just graphics, but indexing arrays in general on
the GPU for use in OpenCL or CUDA. We are working on optimizing our neighbor
searches using this.

A good book: Foundations of Multidimensional and Metric Data Structures by
Hanan Samet (2006)

A good paper: Interactive SPH Simulation and Rendering on the GPU by Goswami

~~~
gruseom
Samet's book is the definitive reference, but I have trouble understanding his
explanations. They do a good job of summarizing what's distinctive about each
technique relative to the others, but a poor job of explaining what the
techniques _are_. This is a pitfall of technical writing, especially by
experts: it's easy to produce something that's intelligible only if you
already know the material, which defeats the purpose (assuming your purpose is
explanatory and not only compository). Still, the book is indispensable as a
reference.

~~~
arnoldoMuller
When I was building OBsearch.net I used this book extensively. I grant you
that some explanations are not very detailed. On the other hand, the
explanations of the most relevant techniques are quite good. See, for
instance, the section that explains LSH or the explanation of Navarro's SAT.

------
psykotic
There are lots of practical uses. One that immediately comes to mind is cache-
friendly layout of power-of-two textures, usually called texture swizzling.
Another application is designing clock trees on VLSI chips to minimize clock
skew; that's actually a space-filling tree rather than a curve, but it's
closely related.

------
pascal_cuoq
"here is the tradeoff: Use our heuristic and you get a reasonable route
immediately. Alternatively, configure a network of 110 processors, then spend
two months computing the shortest route to save a month of driving."

Or, of course, use one of the thousands of other heuristics, most of which
will likely provide as good an answer in as short a time as this heuristic. I
mean, why provide comparisons to comparable heuristics? That would be like
doing science.

~~~
RiderOfGiraffes
If you're interested in such a comparison, Google is your friend. Here, I've
tried this search:

[http://www.google.com/search?q=comparison+of+heuristics+for+...](http://www.google.com/search?q=comparison+of+heuristics+for+the+TSP)

It depends on what you actually want, but here are two links that might be the
sort of thing you're looking for:

<http://www.agent.ai/doc/upload/200302/stut00.pdf>

[http://www.psych.lancs.ac.uk/people/uploads/TomOrmerod200712...](http://www.psych.lancs.ac.uk/people/uploads/TomOrmerod20071220T113800.pdf)

But let me just say that I found your reply to be unnecessarily nasty in its
tone - snippy and snarky. You seem simply to be criticising the authors
because they produced a brief paper describing a particular heuristic, and
demanding that they go further than they have in this paper. Perhaps they
already have; perhaps it's work in progress.

I find it frustrating when people criticise a link largely because it is what
it is and isn't something else. If you want something on comparisons, go find
it. Or alternatively, do the work yourself. Then regardless, come and share
your results.

~~~
_delirium
It's a bit annoying to me mainly because this sort of thing is really common
among heuristics developers (even in papers submitted to
conferences/journals). They develop a heuristic for a problem, and then show
that it beats the exact, guaranteed-optimal algorithm (usually on speed). But
that's only the right comparison if it's the first-ever heuristic for the problem!
Otherwise it's a bit of a straw man, because its real competitors are other
existing heuristics and randomized or approximate algorithms for that problem,
and/or general, domain-independent heuristics like randomized hill-climbing.

Apart from the claim that it beats competitors, it's a really interesting
post, though; light-weight heuristics based on clever connections are
interesting in their own right. (So I sort of agree on the tone.)

~~~
DarkShikari
This is extremely common in compression algorithms "research" as well. At
least in video compression, the vast majority of papers are complete tripe,
comparing only to algorithms known to be weak baselines (e.g. exhaustive
search). You
can easily spot the few good papers simply by looking for the ones that
compare themselves to good algorithms.

------
jjb3
Hello All! I am one of the authors. Thanks for the interesting comments. The
web page is not intended to be a study but rather an enticement to read the
technical papers referenced therein. The JACM paper originally included
detailed comparisons with alternative heuristics but the referees thought it
made the paper too long. Quick summary: Few of the alternatives had been
analyzed for expected performance and most had worst worst-case performance.
But this was all 25 years ago and could be updated. Rgds, JJB

------
rcthompson
You can also compactly visualize an extremely long vector by mapping it onto a
space-filling curve: <http://www.ebi.ac.uk/huber-srv/hilbert/>

I'm using this right now to visualize sequence coverage depth across the human
genome (an integer vector of length 3 billion).
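The heart of such a plot is a function mapping a 1-D index to 2-D Hilbert coordinates. Here's a sketch of the standard iterative conversion (the name `d2xy` follows a common convention; it's not anything from the linked tool):

```python
def d2xy(order, d):
    """Map index d (0 <= d < 4**order) to (x, y) on a Hilbert curve
    filling a 2**order x 2**order grid."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:          # rotate/flip the quadrant as needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```

Consecutive indices always map to adjacent cells, which is exactly why runs of similar values in the vector show up as compact blobs in the 2-D image.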

------
kang
> _that are about 25% longer than optimum_

Is this a guaranteed property or merely a probability? And what random point
sets are used for TSP testing? Given n points in a plane, the number of
distinct tours that can be drawn is (n-1)!/2. One of these is the minimum,
our optimum solution to the TSP. If we plot the frequencies of the lengths of
all these tours (is there a particular name for this distribution?), we find
the probability of any particular length range occurring. This distribution
should be different for different point sets. So the apparent improvement
might simply be that, for the random point sets given as input, the 'high
ridges' of the distribution happen to lie toward the left, near the optimum.
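For tiny instances the distribution asked about here can be computed outright by enumerating every tour. A brute-force sketch (not from the article; the point set is just uniform random):

```python
import itertools
import math
import random

def tour_length(points, order):
    """Total length of the closed tour visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(7)]

# Fix point 0 as the start so rotations aren't double-counted; each tour
# still appears twice (once per direction), which doesn't change the shape.
lengths = sorted(tour_length(pts, (0,) + p)
                 for p in itertools.permutations(range(1, 7)))
optimum, worst = lengths[0], lengths[-1]
```

Plotting `lengths` as a histogram gives exactly the per-instance distribution described above; repeating this over many random instances shows where a heuristic's tours tend to fall relative to the optimum.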

> _A useful property of a spacefilling curve is that it tends to visit all the
> points in a region once it has entered that region. Thus points that are
> close together in the plane will tend to be close together in appearance
> along the curve._

Or very far apart! For example, points on opposite sides of the lines along
which the curve recursively subdivides the plane - and space-filling curves do
generate such dividing lines - can be close in the plane but distant along the
curve. E.g. the yellow points around the blue middle lines here:
<http://i.imgur.com/Dl7Vo.gif>

NB: I don't mean to criticize, just asking questions.

~~~
Groxx
Almost certainly a probability. You could probably prove that it's no worse
than a certain amount, because it won't jump the farthest distance every time
in anything but a truly trivial graph, but a value like 25% is unlikely to be
a hard limit.

The reason: the curve visits cells in a fixed order. It's therefore possible
to design an input whose points fall on non-optimal positions along that
order; I'd guess you could get a tour multiple times the shortest length, not
just 25% longer. _But_ you can get a new tour by simply increasing or
decreasing the order of the curve, so running through a few iterations could
probably get you an exceptionally good result for the sort of speed it
provides.

~~~
kang
Sorry, I don't think I've been clear. My question is: what random point sets
are given as input? Because if we write ANY algorithm, however bad, as long as
it produces a path, saying that the path is only x% longer than optimum is
partly a property of the inputs we chose. x could turn out to be 25 in most of
those cases, who knows.

------
bayareaguy
Space filling curves are also good for building multi-dimensional indexes for
databases.

<http://www.dcs.bbk.ac.uk/TriStarp/pubs/bncod17.pdf>

~~~
arnoldoMuller
If your dimensionality is low, I totally agree - there is a library that can
help you achieve this: <http://code.google.com/p/uzaygezen/> If you are
dealing with objects of intrinsic dimension >= 20, you should use other
techniques.

------
jpadvo
Yet another example is organizing color libraries / color swatches:

<http://people.csail.mit.edu/jaffer/Color/CSDR>

~~~
ableal
Another example of Hilbert curves used for trawling the RGB cube:
<http://corte.si/posts/code/sortvis-fruitsalad/index.html>

(And, of course, IP addresses: <http://xkcd.com/195/>)

------
saalweachter
This seems similar to the pyramid model of Graham, Joshi, Pizlo (2000).

Modeling how (they think) human vision works, they made a heuristic based on
clustering and top-down refinement. The map is clustered into a small handful
of groups, and these groups are placed in a shortest tour. Each cluster is
then clustered in turn into another handful of sub-groups, and these
sub-groups are inserted into the tour at the position of their parent.

The pyramid algorithm is highly parallelizable, with, IIRC, the same time
characteristics as the space-filling curve algorithm.

What made the connection in my mind was that the simplest implementation of
the pyramid algorithm is to cluster by simply dividing the map into quadrants;
when you do so, you end up with the same space-filling curve they use at the
top of the page.
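The simplest version described above - a fixed quadrant visiting order, with no per-level tour optimization - amounts to sorting points by their sequence of quadrant ids, i.e. a Z-order sort. A sketch (assuming points in the unit square; `quadrant_key` and `quadrant_tour` are my names):

```python
def quadrant_key(x, y, depth=8):
    """Sequence of quadrant ids obtained by recursively subdividing the
    unit square; sorting by this key clusters points quadrant by quadrant."""
    digits = []
    for _ in range(depth):
        qx, qy = int(x >= 0.5), int(y >= 0.5)
        digits.append((qy, qx))
        # zoom into the chosen quadrant and repeat
        x = 2 * x - qx
        y = 2 * y - qy
    return digits

def quadrant_tour(points):
    """Visit points in recursive-quadrant (Z-order) sequence."""
    return sorted(points, key=lambda p: quadrant_key(*p))
```

Replacing the fixed visiting order with a small tour computed at each level recovers the pyramid refinement; keeping it fixed recovers a space-filling-curve-style tour.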

------
alextingle
That's genius. So simple and elegant.

------
robinhouston
Space-filling curves can also be used for spatial indexing. Nick Johnson wrote
a nice introduction to the technique on his blog:
[http://blog.notdot.net/2009/11/Damn-Cool-Algorithms-Spatial-...](http://blog.notdot.net/2009/11/Damn-Cool-Algorithms-Spatial-indexing-with-Quadtrees-and-Hilbert-Curves)

